Any reason not to pass request / response as a parameter?

In Express I have a handler for a route, e.g.:

router.get(`${api}/path/:params/entrypoint`, routeHandler);

In this example, the 'routeHandler' function contains a lot of logic doing various things. I'd like to break 'routeHandler' into smaller functions to improve readability and testability. So instead of:

function routeHandler(req, res) {
    // many lines of code
}

We could have:

function routeHandler(req, res) {
    helperOne(req, res);
    helperTwo(req, res);
}

function helperOne(req, res) {
    // do stuff
}

function helperTwo(req, res) {
    // do stuff
}

I am being told not to do this by a fairly senior coworker, but I do not understand why. Does anyone know of any issues that can arise from passing the response or request objects into helpers? I cannot think of any, and Google isn't revealing any clear answer.

Thanks!

#javascript #node-js #express


Watts Kendall


Yes, you can run into problems when passing those parameters around, especially res. For example, you may call res.send() multiple times (once in each helper), which raises an error: Node complains that you cannot set headers after they are sent to the client.

Scenario

Here's a more concrete example:

const routeHandler = (req, res) => {
    helperOne(req, res);
    helperTwo(req, res);
};

Based on some condition, I want to stop and return an error from helperOne and not execute any code from helperTwo. My definitions of these functions look like this:

const helperOne = (req, res) => {
    const dataPoint = req.body.dataPoint; // a number, for example
    if (dataPoint > 10) {
        return res.send("This is not valid. Stopping here...");
    } else {
        console.log("All good! Continue...");
    }
};

const helperTwo = (req, res) => {
    res.send("Response from helperTwo");
};

Then let's say req.body.dataPoint is 20. I'm now expecting my routeHandler to stop after the return res.send in the first branch of the if statement in helperOne.

This will not work as expected, though, because the return only exits helperOne, the function it appears in. In other words, it does not propagate up to routeHandler.

In the end, an exception is raised because routeHandler still calls helperTwo, which tries to send a response again.

Solution

  • Don't pass req or res themselves. Pass just the data each helper needs and handle the response in your main handler.
  • An even better alternative is to use Express middleware. Since you have multiple "sequential" handlers, you can chain middlewares, which is closer to the standard Express.js way (see the sketch below).
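
Here's a minimal sketch of the middleware approach (the handler names are mine, reusing the question's route and the dataPoint check from above). Each middleware either ends the request with a response or calls next() to pass control to the next handler:

const express = require("express");
const router = express.Router();

const api = "/api"; // hypothetical prefix, standing in for the one in the question

// Validation middleware: either ends the request with a response,
// or calls next() to hand control to the next handler in the chain.
const validateDataPoint = (req, res, next) => {
    const dataPoint = req.body.dataPoint;
    if (dataPoint > 10) {
        return res.status(400).send("This is not valid. Stopping here...");
    }
    next(); // all good, continue
};

// Final handler: the only place a success response is sent.
const sendResult = (req, res) => {
    res.send("Response from the final handler");
};

// Express runs these in order; if validateDataPoint sends a response
// and never calls next(), sendResult never runs.
router.get(`${api}/path/:params/entrypoint`, validateDataPoint, sendResult);

This also solves the early-return problem: stopping the chain is as simple as not calling next().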
Tyrique Littel

Is Java "pass by value" or "pass by reference"?

Friends, in this article I'll explain whether Java uses pass by value or pass by reference. After reading it, you will be able to grasp both concepts. While reading, try to focus on the example code and the associated comments.

Java is always pass by value, never pass by reference.

What does pass by value actually mean?

As the name says, pass by value simply means that we pass the value of an object, not the original reference variable itself, from the calling function to the called function. A copy of the reference variable is passed instead, and that copy still points to the same object.
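
To make this concrete, here is a small illustrative example (the class and method names are mine): reassigning the parameter inside the method does not affect the caller's variable, but mutating the object through the copied reference does.

class Box {
    int value;
    Box(int value) { this.value = value; }
}

public class PassByValueDemo {
    static void modify(Box b) {
        b.value = 99;   // mutates the same object the caller sees
        b = new Box(0); // reassigns only the local copy of the reference
    }

    public static void main(String[] args) {
        Box box = new Box(1);
        modify(box);
        System.out.println(box.value); // prints 99, not 0
    }
}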

#pass-by-value #pass-by-reference #pass-by-value-in-java #pass-by-ref-vs-value #pass-by-reference-in-java

Udit Vashisht


Requests Python 3 - Download Files (Free books) with requests-html and requests Python 3

In this video, we use requests and requests-html in Python 3 to download PDF files from Springer's website.
Recently, I came across a list of 408 free books available for download from Springer.
So, I created this script, which uses requests and requests-html to download the files.

https://youtu.be/UMuO2_BVFwY
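
As a rough sketch of the core idea (the URL and filename below are placeholders, not Springer's actual links), downloading a single PDF with requests looks like this:

import requests

url = "https://example.com/sample-book.pdf"  # placeholder URL

response = requests.get(url)
response.raise_for_status()  # fail loudly on a bad HTTP status

with open("sample-book.pdf", "wb") as f:
    f.write(response.content)  # write the raw bytes to disk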

#request-html #requests #requests-python #webscrapping #springer

Greatest Reason Bitcoin Passes $60,000 (2021 Bitcoin Bull Run Not Over)

Around the Blockchain is your favorite Cryptocurrency show discussing Bitcoin, Ethereum, Cardano, and the top altcoins. Our four crypto experts include CryptoWendyO, Blockchain Boy, Arcane Bear, and Ben Armstrong. Tune in for their insightful crypto analysis!

Today we’ll be discussing Millennials’ increasing power in the investment realm. We are talking about Africa possibly following El Salvador’s footsteps with Bitcoin adoption. Finally, we reveal some very bullish news for Bitcoin and Ethereum.
📺 The video in this post was made by BitBoy Crypto
The origin of the article: https://www.youtube.com/watch?v=uQXDZL0sjE4
🔺 DISCLAIMER: This article is for information sharing. The content of this video is solely the opinion of the speaker, who is not a licensed financial advisor or registered investment advisor. This is not investment or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money.

#bitcoin #blockchain #2021

Hoang Ha

How to Scrape Data from Twitter Using Tweepy and Snscrape


If you are a data enthusiast, you'll probably agree that one of the richest sources of real-world data is social media. Sites like Twitter are full of data. You can use the data you get from social media in a number of ways, such as sentiment analysis (analyzing people's thoughts) on a specific issue or field of interest.

There are several ways you can scrape (or gather) data from Twitter. And in this article, we will look at two of those ways: using Tweepy and Snscrape.

We will learn a method to scrape public conversations from people on a specific trending topic, as well as tweets from a particular user.

Now without further ado, let’s get started.

Tweepy vs Snscrape – Introduction to Our Scraping Tools

Now, before we get into the implementation, let's try to grasp the differences and limits of each platform.

Tweepy

Tweepy is a Python library for integrating with the Twitter API. Because Tweepy is connected with the Twitter API, you can perform complex queries in addition to scraping tweets. It enables you to take advantage of all of the Twitter API's capabilities.

But there are some drawbacks – like the fact that its standard API only allows you to collect tweets for up to a week (that is, Tweepy does not allow recovery of tweets beyond a week window, so historical data retrieval is not permitted).

Also, there are limits to how many tweets you can retrieve from a user's account. You can read more about Tweepy's functionalities here.

Snscrape

Snscrape is another approach for scraping information from Twitter that does not require the use of an API. Snscrape allows you to scrape basic information such as a user's profile, tweet content, source, and so on.

Snscrape is not limited to Twitter, but can also scrape content from other prominent social media networks like Facebook, Instagram, and others.

Its advantages are that there are no limits to the number of tweets you can retrieve or the window of tweets (that is, the date range of tweets). So Snscrape allows you to retrieve old data.

But the one disadvantage is that it lacks all the other functionalities of Tweepy – still, if you only want to scrape tweets, Snscrape would be enough.

Now that we've clarified the distinction between the two methods, let's go over their implementation one by one.

How to Use Tweepy to Scrape Tweets

Before we begin using Tweepy, we must first make sure that our Twitter credentials are ready. With that, we can connect Tweepy to our API key and begin scraping.

If you do not have Twitter credentials, you can register for a Twitter developer account by going here. You will be asked some basic questions about how you intend to use the Twitter API. After that, you can begin the implementation.

The first step is to install the Tweepy library on your local machine, which you can do by typing:

pip install git+https://github.com/tweepy/tweepy.git

How to Scrape Tweets from a User on Twitter

Now that we’ve installed the Tweepy library, let’s scrape 100 tweets from a user called john on Twitter. We'll look at the full code implementation that will let us do this and discuss it in detail so we can grasp what’s going on:

import tweepy
import pandas as pd
import time

consumer_key = "XXXX" # Your API/Consumer key
consumer_secret = "XXXX" # Your API/Consumer Secret key
access_token = "XXXX" # Your Access token
access_token_secret = "XXXX" # Your Access token Secret

# Pass in our Twitter API authentication credentials
auth = tweepy.OAuth1UserHandler(
    consumer_key, consumer_secret,
    access_token, access_token_secret
)

# Instantiate the tweepy API
api = tweepy.API(auth, wait_on_rate_limit=True)


username = "john"
no_of_tweets = 100


try:
    # Retrieve the most recent tweets from the user
    tweets = api.user_timeline(screen_name=username, count=no_of_tweets)

    # Pull some attributes from each tweet
    attributes_container = [[tweet.created_at, tweet.favorite_count, tweet.source, tweet.text] for tweet in tweets]

    # Column names for the dataframe
    columns = ["Date Created", "Number of Likes", "Source of Tweet", "Tweet"]

    # Create the dataframe
    tweets_df = pd.DataFrame(attributes_container, columns=columns)
except Exception as e:
    print('Status Failed On,', str(e))
    time.sleep(3)

Now let's go over each part of the code in the above block.

import tweepy
import pandas as pd
import time

consumer_key = "XXXX" # Your API/Consumer key
consumer_secret = "XXXX" # Your API/Consumer Secret key
access_token = "XXXX" # Your Access token
access_token_secret = "XXXX" # Your Access token Secret

# Pass in our Twitter API authentication credentials
auth = tweepy.OAuth1UserHandler(
    consumer_key, consumer_secret,
    access_token, access_token_secret
)

# Instantiate the tweepy API
api = tweepy.API(auth, wait_on_rate_limit=True)

In the above code, we import the Tweepy library (plus pandas and time, which are used later), then create variables to store our Twitter credentials (the Tweepy authentication handler requires four of them). We then pass those variables into the Tweepy authentication handler and save the result in another variable.

The last statement is where we instantiate the Tweepy API and pass in the required parameters.

username = "john"
no_of_tweets = 100


try:
    # Retrieve the most recent tweets from the user
    tweets = api.user_timeline(screen_name=username, count=no_of_tweets)

    # Pull some attributes from each tweet
    attributes_container = [[tweet.created_at, tweet.favorite_count, tweet.source, tweet.text] for tweet in tweets]

    # Column names for the dataframe
    columns = ["Date Created", "Number of Likes", "Source of Tweet", "Tweet"]

    # Create the dataframe
    tweets_df = pd.DataFrame(attributes_container, columns=columns)
except Exception as e:
    print('Status Failed On,', str(e))

In the above code, we set the username (the @name on Twitter) whose tweets we want to retrieve, as well as the number of tweets. We then wrapped the calls in an exception handler to catch errors more effectively.

After that, api.user_timeline() returns a collection of the most recent tweets posted by the user specified in the screen_name parameter, up to the number of tweets we asked for.

In the next line of code, we pass in the attributes we want to retrieve from each tweet and save them into a list. To see more attributes you can retrieve from a tweet, read this.

In the last chunk of code, we create a dataframe and pass in the list we created along with the column names.

Note that the column names must be in the same order as the attributes in the container (that is, the order in which you placed the attributes in the list when retrieving them from each tweet).
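
One way to sidestep that ordering requirement (a variation of mine, not from the original article) is to build a list of dicts instead, so each value is keyed by its column name:

# Keying each row by column name removes the ordering requirement
attributes_container = [
    {
        "Date Created": tweet.created_at,
        "Number of Likes": tweet.favorite_count,
        "Source of Tweet": tweet.source,
        "Tweet": tweet.text,
    }
    for tweet in tweets
]

# pandas derives the column names from the dict keys
tweets_df = pd.DataFrame(attributes_container)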

If you correctly followed the steps I described, you should have something like this:

[Image by Author: preview of the resulting dataframe]

Now that we are done, let's go over one more example before we move into the Snscrape implementation.

How to Scrape Tweets from a Text Search

In this method, we will be retrieving tweets based on a search query. You can do that like this:

import tweepy
import pandas as pd

consumer_key = "XXXX" # Your API/Consumer key
consumer_secret = "XXXX" # Your API/Consumer Secret key
access_token = "XXXX" # Your Access token
access_token_secret = "XXXX" # Your Access token Secret

# Pass in our Twitter API authentication credentials
auth = tweepy.OAuth1UserHandler(
    consumer_key, consumer_secret,
    access_token, access_token_secret
)

# Instantiate the tweepy API
api = tweepy.API(auth, wait_on_rate_limit=True)


search_query = "sex for grades"
no_of_tweets = 150


try:
    # Retrieve tweets matching the search query
    tweets = api.search_tweets(q=search_query, count=no_of_tweets)

    # Pull some attributes from each tweet
    attributes_container = [[tweet.user.name, tweet.created_at, tweet.favorite_count, tweet.source, tweet.text] for tweet in tweets]

    # Column names for the dataframe
    columns = ["User", "Date Created", "Number of Likes", "Source of Tweet", "Tweet"]

    # Create the dataframe
    tweets_df = pd.DataFrame(attributes_container, columns=columns)
except Exception as e:
    print('Status Failed On,', str(e))

The above code is similar to the previous code, except that we changed the API method from api.user_timeline() to api.search_tweets(). We've also added tweet.user.name to the attributes container list.

In the code above, you can see that we passed in two attributes. This is because if we only passed in tweet.user, it would return the entire user object as a dictionary. So we must also specify which attribute we want to retrieve from the user object, in this case name.

You can go here to see a list of additional attributes that you can retrieve from a user object. Now you should see something like this once you run it:

[Image by Author: preview of the search-results dataframe]

Alright, that just about wraps up the Tweepy implementation. Just remember that there is a limit to the number of tweets you can retrieve, and you cannot retrieve tweets more than 7 days old using Tweepy.

How to Use Snscrape to Scrape Tweets

As I mentioned previously, Snscrape does not require Twitter credentials (API key) to access it. There is also no limit to the number of tweets you can fetch.

For this example, though, we'll just retrieve the same tweets as in the previous example, but using Snscrape instead.

To use Snscrape, we must first install its library on our PC. You can do that by typing:

pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git

How to Scrape Tweets from a User with Snscrape

Snscrape includes two methods for getting tweets from Twitter: the command-line interface (CLI) and a Python wrapper. Just keep in mind that the Python wrapper is currently undocumented, but we can still get by with trial and error.

In this example, we will use the Python wrapper because it is more intuitive than the CLI method. But if you get stuck, you can always turn to the GitHub community for assistance; the contributors will be happy to help you.
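
For comparison, the CLI route for the same task would look roughly like this (a sketch based on snscrape's command-line options; check snscrape --help to confirm the flags):

snscrape --jsonl --max-results 100 twitter-search "from:john" > john_tweets.json

The rest of this section sticks with the Python wrapper.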

To retrieve tweets from a particular user, we can do the following:

import snscrape.modules.twitter as sntwitter
import pandas as pd

# Create a list to hold all tweet attributes (data)
attributes_container = []

# Use TwitterSearchScraper to scrape data and append tweets to the list
for i, tweet in enumerate(sntwitter.TwitterSearchScraper('from:john').get_items()):
    if i >= 100:
        break
    attributes_container.append([tweet.date, tweet.likeCount, tweet.sourceLabel, tweet.content])

# Create a dataframe from the tweets list above
tweets_df = pd.DataFrame(attributes_container, columns=["Date Created", "Number of Likes", "Source of Tweet", "Tweets"])

Let's go over some of the code that you might not understand at first glance:

for i, tweet in enumerate(sntwitter.TwitterSearchScraper('from:john').get_items()):
    if i >= 100:
        break
    attributes_container.append([tweet.date, tweet.likeCount, tweet.sourceLabel, tweet.content])


# Create a dataframe from the tweets list above
tweets_df = pd.DataFrame(attributes_container, columns=["Date Created", "Number of Likes", "Source of Tweet", "Tweets"])

In the above code, what sntwitter.TwitterSearchScraper does is return an object of tweets matching the query we passed into it ('from:john', that is, tweets from the user john).

As I mentioned earlier, Snscrape has no limit on the number of tweets, so it will return however many tweets that user has. To cap this, we use the enumerate function, which iterates through the object while adding a counter, so we can stop after the user's 100 most recent tweets.
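
As an aside, an equivalent way to cap the stream (my variation, not from the original article) is itertools.islice, which stops pulling from the generator after 100 items without a manual counter:

from itertools import islice

import snscrape.modules.twitter as sntwitter

# islice ends iteration after 100 tweets, replacing the enumerate-and-break pattern
tweets = islice(sntwitter.TwitterSearchScraper('from:john').get_items(), 100)
attributes_container = [[tweet.date, tweet.likeCount, tweet.sourceLabel, tweet.content] for tweet in tweets]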

You can see that the attribute syntax for each tweet looks like Tweepy's. Below is the list of attributes we can get from a Snscrape tweet, as curated by Martin Beck.

[Image credit: Martin Beck — list of available Snscrape tweet attributes]

More attributes might be added, as the Snscrape library is still in development. For instance, in the image above, source has since been replaced by sourceLabel. If you pass in just source, it returns an object.

If you run the above code, you should see something like this as well:

[Image by Author: preview of the resulting dataframe]

Now let's do the same for scraping by search.

How to Scrape Tweets from a Text Search with Snscrape

import snscrape.modules.twitter as sntwitter
import pandas as pd

# Create a list to append tweet data to
attributes_container = []

# Use TwitterSearchScraper to scrape data and append tweets to the list
for i, tweet in enumerate(sntwitter.TwitterSearchScraper('sex for grades since:2021-07-05 until:2022-07-06').get_items()):
    if i >= 150:
        break
    attributes_container.append([tweet.user.username, tweet.date, tweet.likeCount, tweet.sourceLabel, tweet.content])

# Create a dataframe from the list
tweets_df = pd.DataFrame(attributes_container, columns=["User", "Date Created", "Number of Likes", "Source of Tweet", "Tweet"])

Again, you can access a lot of historical data using Snscrape (unlike Tweepy, whose standard API cannot go back more than 7 days; the premium API extends this to 30 days). We can pass the date we want the search to start from and the date we want it to end into the sntwitter.TwitterSearchScraper() method.

What we've done in the preceding code is basically what we discussed before. The only thing to bear in mind is that until works like Python's range function (that is, it excludes the last value). So if you want tweets from today, you need to pass the day after today as the until parameter.
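
For instance, to make "all of today's tweets" explicit, you can compute the dates with the standard library (a small sketch of mine):

from datetime import date, timedelta

today = date.today()
tomorrow = today + timedelta(days=1)

# until is exclusive, so passing tomorrow's date captures all of today
query = f"sex for grades since:{today} until:{tomorrow}"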

[Image by Author: preview of the search-results dataframe]

Now you know how to scrape tweets with Snscrape, too!

When to Use Each Approach

Now that we've seen how each method works, you might be wondering when to use which.

Well, there is no universal rule for when to use each method. Everything comes down to personal preference and your use case.

If you want to acquire an unlimited number of tweets, you should use Snscrape. But if you want extra features that Snscrape cannot provide (like geolocation, for example), then you should definitely use Tweepy. It is directly integrated with the Twitter API and provides complete functionality.

Even so, Snscrape is the most commonly used method for basic scraping.

Conclusion

In this article, we learned how to scrape data from Twitter with Python using Tweepy and Snscrape. But this was only a brief overview of how each approach works. You can learn more by exploring the web for additional information.

I've included some useful resources that you can use if you need additional information. Thank you for reading.

 Source: https://www.freecodecamp.org/news/python-web-scraping-tutorial/

#python #web 
