• tweepy: Extracting and analyzing tweets related to global warming using tweepy
    2020-10-08 18:15:38


    Whether the global warming phenomenon is occurring or not has been at the center of public debate for many years. One side claims that the earth's temperature is rising fast and that unprecedented natural disasters will occur in the coming days; the other side claims that global warming is just a hoax and no such events will occur. Amid this, it is interesting to see what the majority of people are saying about global warming on social media, and which side of the debate most of them are on.


    Social media has provided platforms where users can give their feedback on any issue. The data posted on these platforms are raw, direct opinions of the people, which can provide invaluable insights into people's perception of a situation. Among social media platforms, Twitter holds a large mine of data. All user data posted on the site is public and can be accessed using the Twitter API. Additionally, Twitter has a hashtag culture, which makes it easier to sort and collect data related to a specific topic of interest.


    Utilizing the data available on Twitter, I have analyzed tweets related to global warming posted on the site under the hashtag #globalwarming. The tweets collected from Twitter are personal posts of the platform's users. In my opinion, it is unethical to disseminate any personal information of the users without their direct consent. Hence, only the high-level findings of the analysis are presented in this post.


    The first step of the analysis is collecting relevant tweets from the Twitter API. Following are the steps to access data from Twitter:


    1. Create a Twitter developer account to access the Twitter API. After the developer account has been created, create an app; this grants access to the "Keys and Access Tokens" for the project. Save these tokens for later use.


    2. The next step is to install the Tweepy library in Python. This can be accomplished by running "pip install tweepy" on the command line.


    3. After the Tweepy library is successfully installed, open a Python notebook and start programming to collect tweets.

    • First, import the tweepy library into Python:

    import tweepy

    • Save the keys and tokens obtained earlier in the respective variables.

    consumer_key = "*****************"
    consumer_secret = "*************************"
    access_token = "****************************"
    access_token_secret = "************************"
    • Create an API object to access the Twitter API
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth,wait_on_rate_limit=True)
    • Collect tweets and save them in a CSV file for future use. For simplicity, only 11,888 tweets were collected and analyzed.
    import csv

    file = open('globalwarming.csv', 'w', encoding='utf-8')
    csvWriter = csv.writer(file)
    for tweet in tweepy.Cursor(api.search, q="#globalwarming", lang="en",
                               count=100, since="2020-01-01").items():
        csvWriter.writerow([tweet.created_at, tweet.text])
    file.close()

    After collecting tweets, one can read them from the CSV file and start analyzing them. Since the tweets are raw text and not very clean, they should be cleaned to get good results. The following steps were undertaken in analyzing the tweets:


    • Firstly, the tweets saved in the CSV file were read using the pandas library. A dataframe with two columns was created: 'Time', denoting when the tweet was posted, and 'Tweets', containing the text of the tweet itself.

    import pandas as pd

    data = pd.read_csv("globalwarming.csv", header=None, encoding='utf-8', names=['Time', 'Tweets'])
    • Observing the text of the tweets, it was found that most of them were not plain text. There were symbols and links that greatly reduced the readability of the text.

    • Therefore, the next step was to clean the text. First, hashtags referring to global warming itself were removed from the tweets, as they do not provide any additional information about the tweet's content. Twitter usernames tagged in the tweets were also removed, as their contribution to the sentiment of a tweet was assumed to be insignificant given the limited scope and size of the analysis. Several hyperlinks in the tweets were removed as well to simplify the analysis. Finally, all punctuation symbols were removed, and the text was converted to lower case.

    import re

    # removing hashtags related to global warming
    def rem_hashtags(text):
        processed_text = re.sub(r"#globalwarming", "", text)
        processed_text = " ".join(processed_text.split())
        return processed_text
    data['Tweets'] = data['Tweets'].apply(lambda x: rem_hashtags(x))

    # removing tagged users from the tweets
    def remove_users(text):
        processed_text = re.sub(r'@\w+ ?', "", text)
        processed_text = " ".join(processed_text.split())
        return processed_text
    data['Tweets'] = data['Tweets'].apply(lambda x: remove_users(x))

    # removing hyperlinks mentioned in the tweets
    def remove_links(text):
        processed_text = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", text)
        processed_text = " ".join(processed_text.split())
        return processed_text
    data['Tweets'] = data['Tweets'].apply(lambda x: remove_links(x))

    # removing punctuation in the tweets
    def remove_punct(text):
        punctuations = '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
        text = "".join([char for char in text if char not in punctuations])
        text = re.sub('[0-9]+', '', text)
        return text
    data['Tweets'] = data['Tweets'].apply(lambda x: remove_punct(x))

    # making all tweets lowercase
    def lowercase_word(text):
        return text.lower()
    data['Tweets'] = data['Tweets'].apply(lambda x: lowercase_word(x))
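    As a quick sanity check, the cleaning steps can be applied to a single tweet. The functions are restated below so the snippet is self-contained, and the sample tweet is invented for illustration:

    ```python
    import re

    def rem_hashtags(text):
        # drop the #globalwarming hashtag, then normalize whitespace
        return " ".join(re.sub(r"#globalwarming", "", text).split())

    def remove_users(text):
        # drop @-mentions
        return " ".join(re.sub(r'@\w+ ?', "", text).split())

    def remove_links(text):
        # drop hyperlinks
        return " ".join(re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", text).split())

    sample = "@alice Check this out #globalwarming https://t.co/abc123 Wildfires again!"
    cleaned = remove_links(remove_users(rem_hashtags(sample))).lower()
    print(cleaned)  # check this out wildfires again!
    ```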

    Making WordClouds

    After cleaning the text, a word cloud was generated to visualize some of the most frequently repeated words in the tweets. The most frequent words often give an idea of the topics people are most interested in or concerned about.


    from wordcloud import WordCloud
    import matplotlib.pyplot as plt

    tweet_All = " ".join(tweet for tweet in data['Tweets'])
    fig, ax = plt.subplots(1, 1, figsize=(30, 30))
    # Create and generate a word cloud image:
    wordcloud_ALL = WordCloud(max_font_size=50, max_words=100, background_color="white").generate(tweet_All)
    # Display the generated image:
    ax.imshow(wordcloud_ALL, interpolation='bilinear')
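    The same information the word cloud visualizes can be checked in text form by counting word frequencies directly. A minimal sketch, using a toy list of cleaned tweets in place of the real dataset:

    ```python
    from collections import Counter

    # toy stand-in for data['Tweets'] after cleaning
    tweets = [
        "climate change is real",
        "climate change and wildfire",
        "wildfire smoke everywhere",
    ]

    # split all tweets into words and tally them
    counts = Counter(" ".join(tweets).split())
    print(counts.most_common(3))  # [('climate', 2), ('change', 2), ('wildfire', 2)]
    ```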

    From the word cloud, it can be seen that the majority of people were tweeting mostly about "climate change". The name of one professor ("Richard") also appeared frequently in the tweets. Words like "phd exposes", "phd", "scientist", and "professor" indicate that there was some discussion of findings from research related to global warming. There was also talk of "fire" and "wildfire", capturing the recent events in the US which some people consider an after-effect of global warming. Words like "face lie", "bald face", and "bestlie" show some people's strong opinions against the global warming phenomenon.


    Sentiment Analysis

    After identifying the most discussed topics in the tweets related to global warming, it is interesting to know how these tweets are polarized. Therefore, sentiment analysis was performed on the tweets using the TextBlob library, which provides a simple API for Natural Language Processing tasks, including sentiment analysis of texts.


    from textblob import TextBlob

    def get_tweet_sentiment(polarity):
        if polarity > 0:
            return 'positive'
        elif polarity == 0:
            return 'neutral'
        return 'negative'

    tweets = [TextBlob(tweet) for tweet in data['Tweets']]
    data['polarity'] = [b.sentiment.polarity for b in tweets]
    data['subjectivity'] = [b.sentiment.subjectivity for b in tweets]
    data['sentiment'] = data['polarity'].apply(get_tweet_sentiment)
    data['sentiment'].value_counts()
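    The thresholding logic that maps a polarity score to a label can be checked on its own, without any Twitter data. The polarity values below are illustrative, not TextBlob output for any real tweet:

    ```python
    def get_tweet_sentiment(polarity):
        # positive polarity -> 'positive', zero -> 'neutral', negative -> 'negative'
        if polarity > 0:
            return 'positive'
        elif polarity == 0:
            return 'neutral'
        return 'negative'

    for p in (0.8, 0.0, -0.3):
        print(p, get_tweet_sentiment(p))
    # 0.8 positive
    # 0.0 neutral
    # -0.3 negative
    ```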

    After performing sentiment analysis, it was found that 63.4% of the tweets were neutral, 22.8% were positive, and 13.7% were negative. The majority of the collected tweets did not express polarized views.
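    The percentages come from normalizing the label counts. A minimal sketch with a toy label list whose proportions roughly mirror the findings (the counts are made up for illustration):

    ```python
    from collections import Counter

    # toy labels approximating the reported shares
    labels = ['neutral'] * 634 + ['positive'] * 228 + ['negative'] * 138
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
    print(shares)  # {'neutral': 63.4, 'positive': 22.8, 'negative': 13.8}
    ```

    With the real dataframe, data['sentiment'].value_counts(normalize=True) * 100 yields the same percentage breakdown directly.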


    Ethical Dilemma and Limitations

    The simple analysis done on the tweets associated with the hashtag #globalwarming gave us a tentative idea of people's opinions about global warming on Twitter. However, the analysis was done with only a limited number of tweets, so it may not represent the opinion of a larger population. A larger number of tweets could be collected to make the study more robust, but there is always an ethical concern. Twitter users' consent for data collection is based only on the terms and conditions they signed with the platform, which most users may not read line by line. Therefore, it is difficult to say whether users are even aware that their data are being collected and analyzed. And when a lot of data is collected, it is not possible to contact each user for consent.


    Additionally, using only one hashtag (i.e., #globalwarming) may not retrieve all data related to global warming. Opinions on global warming may have been posted under a different hashtag, or without one at all. In such cases, the sample collected from Twitter is biased and will not provide accurate findings. Also, only tweets posted in English were collected, so the findings represent only the subsection of the population who tweet in English.


    Translated from: https://medium.com/social-media-theories-ethics-and-analytics/extracting-and-analyzing-tweets-related-to-global-warming-using-tweepy-395a3b9dbd27



    Twitter's API v2 has now been out for almost a year. tweepy, the most popular Twitter scraping module, recently released version 4.0, which, besides remaining compatible with API v1, also supports API v2.

    The official documentation for tweepy 4.0:

    Now let's see how to install tweepy 4.0:

    At present, the pip install tweepy command still installs tweepy 3.10 by default, so use pip install git+https://github.com/tweepy/tweepy.git to install tweepy 4.0.
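    Once 4.0 is installed, API v2 calls go through the new tweepy.Client class rather than tweepy.API. A minimal sketch, assuming a bearer token from the developer portal (the token and hashtag are placeholders, and the tweepy import is deferred so the query builder runs without credentials):

    ```python
    def build_query(hashtag, lang="en"):
        # Twitter API v2 search syntax: a hashtag operator plus a language filter
        return "#{} lang:{}".format(hashtag, lang)

    def search_recent(bearer_token, hashtag, max_results=100):
        import tweepy  # requires tweepy >= 4.0
        client = tweepy.Client(bearer_token=bearer_token)
        # search_recent_tweets covers roughly the last 7 days of tweets
        return client.search_recent_tweets(query=build_query(hashtag),
                                           max_results=max_results)

    print(build_query("globalwarming"))  # #globalwarming lang:en
    ```

    search_recent_tweets is the v2 counterpart of the old api.search used with tweepy 3.x.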


    For how to use tweepy 4 to call Twitter API v2, see:

    tweepy 4: searching historical tweets



    Material on Twitter and Tweepy is scarce on CSDN and even in search engines generally.


    As we know, getting Twitter data with a direct crawler is quite troublesome, since much of the data is rendered by JavaScript. So it is best to take the official road: the official Twitter API (I have already applied for access; I won't repeat how to apply here). Tweepy, introduced today, is a library that wraps the Twitter API. It is much easier to use than the official API for OAuth authentication, the code is quite concise, and combined with libraries such as pandas and echarts it enables many interesting applications.


    Install Tweepy: pip install tweepy


    import tweepy

    # API authentication
    def ApiAuthentic():
        consumer_key = "your key"
        consumer_secret = "your consumer_secret"
        access_token = "your token"
        access_token_secret = "your token_secret"
        auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_token, access_token_secret)
        api = tweepy.API(auth)
        # redirect_url = auth.get_authorization_url()
        # print(api.me().screen_name + ' authenticated')
        return api

    # Load the monitoring list (one user id per line)
    def ImportMonitor(opendir):
        result = []
        with open(opendir, 'r') as file_to_read:
            while True:
                line = file_to_read.readline()
                if not line:
                    break
                result.append(line.strip('\n'))
        return result

    # Query data: save username, id, and follower count for each user
    def QueryFans(api, userlist):
        for i in userlist:
            user = api.get_user(i)
            with open(r'data.txt', 'a+') as f:
                f.write('%s,%s,%s\n' % (user.screen_name, user.id_str, user.followers_count))

    if __name__ == '__main__':
        userlist = ImportMonitor(r'monitor_list.txt')
        api = ApiAuthentic()
        print('------' + api.me().screen_name + ' authenticated------')
        QueryFans(api, userlist)

    The script reads IDs from the text file and, through the API, saves the data to data.txt. Opening the txt shows, on each line: username, id, follower count. See how it works?



    import tweepy

    consumer_key = "your key"
    consumer_secret = "your consumer_secret"
    access_token = "your token"
    access_token_secret = "your token_secret"
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    # Print each tweet from the home timeline
    public_tweets = api.home_timeline()
    for tweet in public_tweets:
        print(tweet.text)






    The API class provides access to the entire Twitter REST API; each method accepts various parameters and returns a response. tweepy also supports long-lived connections for receiving real-time data, that's right: Stream! tweepy API calls generally return objects; different objects expose different fields, which need to be examined case by case:

    • Tweet-related: the Status object
    • User-related: the User object
    • Friendship-related: the Friendship object
    • Saved-search-related: the SavedSearch object
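    As a rough illustration of the shape of these objects, a Status behaves like a record with attributes such as text and created_at and nests a User. The stand-in classes below only mimic that shape for illustration; the attribute names follow common Twitter API v1.1 fields, not the full tweepy objects:

    ```python
    from dataclasses import dataclass

    @dataclass
    class User:          # stand-in for tweepy's User object
        screen_name: str
        followers_count: int

    @dataclass
    class Status:        # stand-in for tweepy's Status object
        text: str
        created_at: str
        user: User

    # attribute access works the same way on real tweepy objects
    s = Status("hello twitter", "2021-06-16", User("someone", 42))
    print(s.user.screen_name, s.user.followers_count)  # someone 42
    ```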

    Alright, just do it!






  • Installing tweepy and obtaining API access

    2022-03-04 18:33:39

    1 Obtaining API access

    1.1 Registering a Twitter API account



    • For non-commercial academic use, it is best if the contact email bound at the start is a university email address, which increases the chance of later applications being approved. There is no need to register with the university email; just change the email in Twitter's Account Information to the university address.

    After registering a Twitter API account, you can use it to log in at https://developer.twitter.com/en/portal/dashboard
    You will need to create a project; the app's keys and tokens can be viewed at any time under "Projects & Apps" on that site

    1.2 Applying for Elevated access to use tweepy

    In the developer portal, under Products > Twitter API v2, there is the Elevated option. The application asks, among other things, for:

    • The core use case, intent, or business purpose for your use of the Twitter APIs.
    • If you intend to analyze Tweets, Twitter users, or their content, share details about the analyses you plan to conduct, and the methods or techniques.
    • If your use involves Tweeting, Retweeting, or liking content, share how you’ll interact with Twitter accounts, or their content.
    • If you’ll display Twitter content off of Twitter, explain how, and where, Tweets and Twitter content will be displayed with your product or service, including whether Tweets and Twitter content will be displayed at row level, or aggregated.
    • For the question about government use, answer No



    • You can reply with the template again; be careful not to include any of their questions in the email
    • Replying in the morning, US time, tends to get a quick review
    • Receiving an "Elevated Access Approved" email means the application succeeded

    2 Installing and running tweepy

    2.1 Installing tweepy in a virtual environment

    Step 1: Create a project directory

    mkdir project_name
    cd project_name

    Step 2: Install and create the virtual environment

    • Install the virtualenv module (not needed for Python 3)
    pip install virtualenv #Windows Python 2
    sudo pip install virtualenv  # Linux & macOS
    • Create the virtual environment

    Python 3:

    python -m venv environment_name # Windows
    python3 -m venv env  # Linux & macOS

    Python 2:

    virtualenv environment_name

    This creates a folder named 'environment_name' (or env) inside the project_name directory.

    Step 3: Activate the virtual environment and install tweepy

    environment_name\Scripts\activate  # Windows
    . env/bin/activate  # Linux & macOS


    pip install tweepy

    2.2 Running your first tweepy script


    import tweepy
    # Authenticate to Twitter
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) 
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    # Create API object
    api = tweepy.API(auth)

    Keys and tokens can be found and generated at https://developer.twitter.com/en/portal/dashboard
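    Rather than hard-coding the keys in the script, a common pattern (an assumption on my part, not from the original) is to read them from environment variables so they never end up in version control:

    ```python
    import os

    # assumes the variables were exported in the shell beforehand, e.g.
    #   export CONSUMER_KEY=...        (Linux & macOS)
    #   set CONSUMER_KEY=...           (Windows)
    consumer_key = os.environ.get("CONSUMER_KEY", "")
    consumer_secret = os.environ.get("CONSUMER_SECRET", "")

    if not consumer_key:
        print("CONSUMER_KEY is not set")
    ```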



