I’ve analyzed 1,321 tweets to answer a question many of us pandemic-bound remote workers have wondered since Zoom became part of our daily lives: do people like my room?!
Unlike Animal Crossing, there’s no authoritative raccoon we can rely on for objective feedback about our decoration skills.
Instead, here in the real world, the closest thing we’ve got is Room Rater (@ratemyskyperoom). As more and more (famous) people are revealing their homes via the laptop lens, Room Rater has stepped up to judge them, publicly and quantitatively.
Not all of our homes will be broadcast on national TV. At least not in the near future. But we can all agree, when that day comes, we want the world to see our rooms (and, by extension, our very beings) as worthy of a 10/10.
So I ask: “What does it take to get a 10/10 rating for my room?!”
To find out, I pulled down all of @ratemyskyperoom’s 1,321 room rating tweets from May 2020 to July 2020, parsed out the ratings, then looked at the content of both the images and text for each of their tweets.
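Pulling the score out of each tweet can be sketched with a simple regular expression. This is a hypothetical illustration (the tweet strings below are made up; Room Rater's actual phrasing varies), assuming each review ends with a "score/10" pattern:

```python
import re

# Hypothetical tweet texts; Room Rater reviews typically end with "<score>/10".
tweets = [
    "Nice depth. Great art. Needs a plant. 8/10",
    "Perfect lighting, books well staged. 10/10",
]

# Capture the numeric score that precedes "/10".
pattern = re.compile(r"(\d+(?:\.\d+)?)/10")

ratings = [float(m.group(1)) for t in tweets if (m := pattern.search(t))]
print(ratings)  # [8.0, 10.0]
```

From a list like this, the distribution and average shown below follow directly.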
Below are the critical, profound and stirring insights I’ve found in the data.
(The interactive version of this post is here: Room Rating Stats Notebook)
% Distribution of All Room Ratings per Score (src). The average rating is 7.5/10.
The good news: it’s apparently not hard to get a high score. The average rating is 7.5/10, and they hand out 8/10s like candy. In fact, *most of the ratings* are at least 8/10.
Room Rater talks a tough game, but deep down they’re softies.
This silly rendering shows changes in room ratings for individual Twitter users between their first rating and their last rating (src). The left column shows speakers who have improved their room’s rating; the right column shows speakers who have fallen from grace.
Even if your first rating is low, you’ve always got another chance on your next TV appearance. At least 83 people’s rooms have been rated on more than one occasion.
Whether you come from the hard streets of Scranton or Sesame (@JoeBiden +1, @elmo +1), whether you’re a politician, press, pollster or professor (@RepKarenBass +2; @marycjordan +3, @FrankLuntz +3; @j_g_allen +4), Room Rater is willing to give your room a second chance. Above, you can see the 14 people on the left who improved their rating by at least 3 points between their first and last appearances. The awards for most improvement go to @anitakumar01 and @RosenJeffrey (+5 each).
#data-science #remote-working #design #data analysis
Static code analysis refers to the technique of approximating the runtime behavior of a program. In other words, it is the process of predicting the output of a program without actually executing it.
Lately, however, the term “static code analysis” is more commonly used to refer to one application of this technique rather than the technique itself: program comprehension, i.e., understanding a program and detecting issues in it (anything from syntax errors to type mismatches, performance hogs, likely bugs, security loopholes, etc.). This is the usage we’ll be referring to throughout this post.
“The refinement of techniques for the prompt discovery of error serves as well as any other as a hallmark of what we mean by science.”
We cover a lot of ground in this post. The aim is to build an understanding of static code analysis and to equip you with the basic theory and the right tools so that you can write analyzers on your own.
We start our journey by laying down the essential parts of the pipeline a compiler follows to understand what a piece of code does. We learn where to tap into this pipeline to plug in our analyzers and extract meaningful information. In the latter half, we get our feet wet and write four such static analyzers, completely from scratch, in Python.
Note that although the ideas here are discussed in light of Python, static code analyzers across all programming languages are carved out along similar lines. We chose Python because of its easy-to-use `ast` module and the wide adoption of the language itself.
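As a taste of what such an analyzer looks like, here is a minimal sketch using the `ast` module. The check itself (flagging functions without docstrings) is a hypothetical example of mine, not one of the four analyzers built later in the post:

```python
import ast

# A small snippet to analyze; note that greet() has no docstring.
code = """
def greet(name):
    return 'Hello, ' + name
"""

# Parse the source into an abstract syntax tree, then walk it and
# collect every function definition that lacks a docstring.
tree = ast.parse(code)
missing = [node.name for node in ast.walk(tree)
           if isinstance(node, ast.FunctionDef)
           and ast.get_docstring(node) is None]

print(missing)  # ['greet']
```

The program is never executed; everything the analyzer learns comes from the tree representation of the source.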
Before a computer can finally “understand” and execute a piece of code, it goes through a series of complicated transformations:
Static analyzers feed on the output of these stages. To better understand static analysis techniques, let’s look at each of these steps in some more detail:
The first thing that a compiler does when trying to understand a piece of code is to break it down into smaller chunks, also known as tokens. Tokens are akin to what words are in a language.
A token might consist of a single character (like `(`), a literal (an integer or a string such as `'Bob'`), or a reserved keyword of the language (e.g., `def` in Python). Characters that do not contribute to the semantics of a program, like trailing whitespace and comments, are often discarded by the scanner.
Python provides the `tokenize` module in its standard library to let you play around with tokens:

```python
import io
import tokenize

code = b"color = input('Enter your favourite color: ')"
for token in tokenize.tokenize(io.BytesIO(code).readline):
    print(token)
```
```
TokenInfo(type=62 (ENCODING), string='utf-8')
TokenInfo(type=1 (NAME), string='color')
TokenInfo(type=54 (OP), string='=')
TokenInfo(type=1 (NAME), string='input')
TokenInfo(type=54 (OP), string='(')
TokenInfo(type=3 (STRING), string="'Enter your favourite color: '")
TokenInfo(type=54 (OP), string=')')
TokenInfo(type=4 (NEWLINE), string='')
TokenInfo(type=0 (ENDMARKER), string='')
```
(Note that for the sake of readability, I’ve omitted a few columns from the result above — metadata like starting index, ending index, a copy of the line on which a token occurs, etc.)
#code quality #code review #static analysis #static code analysis #code analysis #static analysis tools #code review tips #static code analyzer #static code analysis tool #static analyzer
Have you ever visited a restaurant or movie theatre, only to be asked to participate in a survey? What about providing your email address in exchange for coupons? Do you ever wonder why you get ads for something you just searched for online? It all comes down to data collection and analysis. Indeed, everywhere you look today, there’s some form of data to be collected and analyzed. As you navigate running your business, you’ll need to create a data analytics plan for yourself. Data helps you solve problems, find new customers, and reassess your marketing strategies. Automated business analysis tools provide key insights into your data. Below are a few of the many valuable benefits of using such a system for your organization’s data analysis needs.
#big data #latest news #data analysis #streamline your data analysis #automated business analysis #streamline your data analysis with automated business analysis
Time series analysis is the backbone for many companies, since most businesses work by analyzing their past data to predict their future decisions. Analyzing such data can be tricky, but Python, as a programming language, can help to deal with it. Python has both built-in tools and external libraries, making the whole analysis process seamless and easy. Python’s pandas library is frequently used to import, manage, and analyze datasets in various formats. However, in this article, we’ll use it to analyze stock prices and perform some basic time-series operations.
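The kinds of basic time-series operations meant here can be sketched as follows. The prices below are made-up placeholder values; in practice you would load historical stock data from a CSV or an API with something like `pd.read_csv(..., parse_dates=True)`:

```python
import pandas as pd

# Hypothetical daily closing prices indexed by date.
dates = pd.date_range("2021-01-01", periods=10, freq="D")
prices = pd.Series([100, 102, 101, 105, 107, 106, 108, 110, 109, 112],
                   index=dates, name="close")

# Basic time-series operations pandas gives you out of the box:
weekly_mean = prices.resample("W").mean()  # downsample to weekly averages
returns = prices.pct_change()              # day-over-day percent change
rolling = prices.rolling(window=3).mean()  # 3-day moving average

print(rolling.iloc[-1])  # mean of the last three prices
```

Because the series is indexed by a `DatetimeIndex`, resampling, shifting, and rolling windows all become one-liners.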
#data-analysis #time-series-analysis #exploratory-data-analysis #stock-market-analysis #financial-analysis #getting started with time series using pandas
Bayesian analysis offers the possibility to get more insights from your data compared to the pure frequentist approach. In this post, I will walk you through a real-life example of how a Bayesian analysis can be performed. I will demonstrate what can go wrong when choosing a poor prior, and we will see how to summarize our results. To follow this post, I assume you are familiar with the foundations of Bayesian statistics and with Bayes’ theorem.
As an example analysis, we will discuss a real life problem from a physics lab. No worries, you don’t need any physics knowledge for that. We want to determine the efficiency of a particle detector. A particle detector is a sensor that may produce a measurable signal when certain particles traverse it. The efficiency of the detector we want to evaluate is the chance that the detector actually measures the traversing particle. In order to measure this, we put the detector that we want to evaluate in between two other sensors in a sandwich-like structure. If we measure a signal in the top and bottom sensors we know that a particle should have also traversed the detector in the middle. A picture of the experimental setup is shown below.
We want to measure the efficiency of a particle detector (device under test). Two different sensors (triggers) are placed on top and below the detector in order to detect particles traversing the setup (in this case muons µ).
For the measurement, we count the number of traversing particles N in a certain time (as reported by the top and bottom sensors) as well as the number of signals measured in our detector r. For this example, we assume N=100 and r=98.
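With these numbers, the posterior for the efficiency can be computed in closed form. As a minimal sketch, assuming a uniform Beta(1, 1) prior (the post itself discusses how the choice of prior matters) and a binomial likelihood, conjugacy gives a Beta(r + 1, N − r + 1) posterior:

```python
from scipy import stats

# Counts from the text: N particles seen by the triggers, r detected.
N, r = 100, 98

# Uniform Beta(1, 1) prior + binomial likelihood => Beta posterior.
posterior = stats.beta(r + 1, N - r + 1)

mean = posterior.mean()              # posterior mean efficiency
interval = posterior.interval(0.95)  # central 95% credible interval
mode = r / N                         # posterior mode; equals r/N here

print(f"mean={mean:.4f}, 95% interval=({interval[0]:.4f}, {interval[1]:.4f})")
```

Note the asymmetry: with an efficiency this close to 1, the interval extends much further below the mode than above it, which a naive Gaussian error bar would get wrong.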
#confidence-interval #data-analysis #physics #bayesian-analysis #prior #data analysis
Tableau is one of the most powerful and popular data analysis tools. It holds almost 14% of the business analytics software market and has one of the most satisfied customer bases in the industry.
This is part of a Tableau tips and tricks series, where I share timesaving shortcuts for data analysis. These are tricks every Tableau user can apply to make their analysis faster and more efficient. So let’s get started.
Copy and paste is an amazingly handy feature: you can simply copy your data and paste it into Tableau for much faster analysis. (Windows: Ctrl+C to copy, Ctrl+V to paste. Mac: Command+C to copy, Command+V to paste.)
Tableau supports dragging and dropping flat files to create data extract connections.
Right-click while dragging measures/dimensions into the view to get quick aggregation options.
To repeat and reuse a measure that already exists in the view, hold down the Ctrl key (Command on Mac) while dragging it.
Tableau is a powerful and intelligent tool: instead of typing out the zeros in large numbers, you can use the suffixes K and M (K for thousands, M for millions).
Use Ctrl+Shift+area selection to zoom in quickly and easily when looking at large time-series data.
To swap the measures in a view: Analysis (menu bar) → Cycle Fields.
Use the Esc key to remove all filters applied on the analytics dashboard and revert it to its initial state.
Menu bar → “Format” → “Format Dashboard” to change titles and text in the dashboard.
Tableau has a huge community where thousands of analysis enthusiasts answer questions and publish their own dashboards for the public to reference and use. Making a dashboard from scratch requires a lot of time and tons of creativity, but if we can align our data with publicly available dashboards, our work is cut in half.
I recommend taking a look. Choose a dashboard you like and try to align your analytical data with it in a creative manner.
#data-analysis #data-analytics #business-analysis #tableau #data-science #data analysis