
How to extract data using Selenium without using Facebook Graph API

This tutorial explains how to extract data from Facebook using Selenium and Python, without the Facebook Graph API. The reason for using Selenium instead of the Graph API is that Facebook can modify or disable access to any API endpoint at any time. One example is the aftermath of the Cambridge Analytica fiasco, in which a third party abused data obtained through the Facebook platform.

The idea for this post came while consulting a client on a sentiment analysis project. They wanted to collect online text data from sources such as social media, blogs, and forums, so I used Selenium to build a sample automation tool that can act like a human user. The Facebook Graph API exposes similar data, but some of its endpoints, including permissions and public-data access, seem to have been disabled. If we relied solely on the API and Facebook cut it off, it would be very difficult to keep a scraper working.

What is Selenium again? Selenium is a tool for automating your browser: it lets you drive the browser as if a human were using it, imitating a user’s actions. In this script, the flow is: open Facebook > log in > navigate to a fan page post. You can run the program from a Jupyter Notebook. Don’t forget to download chromedriver and place it in the project directory.
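A minimal setup sketch, assuming you install via pip and fetch the driver manually (adjust the path for your own system):

```shell
# install the Selenium Python bindings
pip install selenium

# download the chromedriver build that matches your installed Chrome version
# and place the binary in your project directory (next to this script)
```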

import time
import re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

usr = "<your_facebook_email_address>"
pwd = "<your_facebook_password>"

url = ";id=157851205951"  # the full URL was truncated in the original post
driver = webdriver.Chrome('/Users/zero/Documents/GitHub/SentimentAnalysis/chromedriver')
driver.get(url)

# Dismiss the dialog that may appear before the login form
# (the XPath was truncated in the original; completed only enough to be valid)
try:
    driver.find_element_by_xpath('//*[@id="viewport"]/div/div[3]/div/div[2]').click()
except Exception:
    pass


elem = driver.find_element_by_id("m_login_email")
elem.send_keys(usr)

elem = driver.find_element_by_id("m_login_password")
elem.send_keys(pwd)
elem.send_keys(Keys.ENTER)


# Keep clicking "View more comments..." until the button no longer appears
# (the XPath below was truncated in the original; completed only enough to be valid)
hasLoadMore = True
while hasLoadMore:
    try:
        driver.find_element_by_xpath(
            '//*[@id="viewport"]/div/div[4]/div/div/div/div/div/div[2]/div/div/div[5]/*[@class="async_elem"]'
        ).click()
        time.sleep(2)  # give the newly loaded comments time to render
    except Exception:
        hasLoadMore = False

# Collect commenter names and comment texts
users_list = []
users = driver.find_elements_by_class_name('_2b05')
for user in users:
    users_list.append(user.text)

texts_list = []
texts = driver.find_elements_by_class_name('_2b06')
for txt in texts:
    texts_list.append(txt.text)

comments_count = len(users_list)
for i in range(comments_count):
    print("User ", users_list[i])
    print("Text ", texts_list[i])


[Screenshot: Selenium + Python on Facebook]

The above is not complete code. You may need to keep triggering the View more comments… button until all comments are loaded.
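The load-more pattern can be sketched independently of Selenium. Here `click_view_more` is a hypothetical callable standing in for the find-and-click step (with a real driver it would wrap `find_element_by_xpath(...).click()`), and it raises once the button is gone:

```python
def click_until_gone(click_view_more, max_rounds=100):
    """Repeatedly invoke click_view_more() until it raises an exception,
    i.e. until the 'View more comments...' button disappears.
    Returns the number of successful clicks."""
    clicks = 0
    for _ in range(max_rounds):  # safety cap so we never loop forever
        try:
            click_view_more()
        except Exception:
            break  # button no longer present: all comments are loaded
        clicks += 1
    return clicks

# Stub demonstrating the behaviour: the "button" vanishes after 3 clicks.
state = {"remaining": 3}
def fake_click():
    if state["remaining"] == 0:
        raise RuntimeError("element not found")
    state["remaining"] -= 1

print(click_until_gone(fake_click))  # prints 3
```

The `max_rounds` cap is a defensive choice: if Facebook keeps serving more comments indefinitely, the scraper still terminates.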

[Screenshot: Selenium + Python on Facebook]

Also, you can save all the data into MongoDB for future analysis.
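A sketch of shaping the scraped pairs into documents before storing them. The database and collection names (`sentiment`, `facebook_comments`) are assumptions, and the actual `pymongo` insert is shown in comments so the snippet runs without a MongoDB server:

```python
# Pair each commenter with their comment text (sample data standing in for
# the real users_list / texts_list collected by the scraper)
users_list = ["Alice", "Bob"]
texts_list = ["Great post!", "Thanks for sharing."]

documents = [
    {"user": user, "text": text}
    for user, text in zip(users_list, texts_list)
]
print(documents)

# With pymongo installed and a server running, the insert would look like:
# from pymongo import MongoClient
# client = MongoClient("mongodb://localhost:27017/")
# client["sentiment"]["facebook_comments"].insert_many(documents)
```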

I’m not responsible for any legal action or damage that may result from this. It’s for educational purposes only.
