How to load all entries in an infinite scroll at once to parse the HTML in Python


Solution 1

You won't be able to do this with requests and BeautifulSoup, because the page you want to extract information from loads the rest of the entries through JavaScript as you scroll down. You can do it with Selenium, which opens a real browser and lets you send page-down key press events programmatically. Watch this video to see it in action: http://www.youtube.com/watch?v=g54xYVMojos

http://www.tidbitsofprogramming.com/2014/02/crawling-website-that-loads-content.html

Below is a script that extracts all 100 post titles using Selenium.

import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

browser = webdriver.Chrome()

browser.get("https://medium.com/top-100/december-2013")
time.sleep(1)

# Send the key presses to the body element
elem = browser.find_element(By.TAG_NAME, "body")

no_of_pagedowns = 20

while no_of_pagedowns:
    elem.send_keys(Keys.PAGE_DOWN)
    time.sleep(0.2)
    no_of_pagedowns -= 1

post_elems = browser.find_elements(By.CLASS_NAME, "post-item-title")

for post in post_elems:
    print(post.text)

Output:

When Your Mother Says She’s Fat
When “Life Hacking” Is Really White Privilege
As tendências culturais dos anos 2000 adiantadas pelo É o Tchan na década de 90
Coming Out as Biracial
Como ganhar discussões com seus parentes de direita neste Natal
How to save local bookstores in two easy steps
Welcome to Dinovember
How to Piss Off Your Barista
The boy whose brain could unlock autism
CrossFit’s Dirty Little Secret
Welcome to Medium
Here’s How the Military Wasted Your Money in 2013
Why I Wear Nail Polish
The day of High School I’ll never forget
7 Reasons Buffalonians Shouldn’t Hate Snow
Dear Guy Who Just Made My Burrito:
Is the Mona Lisa Priceless?
Please stop live tweeting people’s private conversations
Your Friends and Rapists
Eight things you can live without
The Value of Content
40 Ways To Make Life Simple Again
Manila-Beijing-Washington:
Things I Wish Someone Had Told Me When I Was Learning How to Code
Dear Ticketmaster,
Steve Jobs Danced To My Song
11 Things I Wish I Knew When I Started My Business
Bullish: Benevolent Sexism and “That Guy” Who Makes Everything Awkward
Advice to a College Music Student
Silver Gyninen joutui sotaan
Imagining the Post-Antibiotics Future
Which side are you on?
Put it away, junior. 
Casual Predation
The sad little iPhone commercial
How Node.js is Going to Replace JavaScript
Why you should have your heart broken into a million little pieces. 
How to Write Emails Like a CEO
Designing Products That Scale
How radioactive poison became the assassin’s weapon of choice
Why do people hate CrossFit?
We (Still) Need Feminism
10 Advanced Hearthstone Arena Tips
Let It Full-Bleed
What Medium Is For
How a Small Force of Finnish Ski Troops Fought Off a Massive Soviet Army
An Introvert’s Guide to Better Presentations
Mandela The Terrorist
Why You Should have a Messy Desk
Why I’m Not a TEDx Speaker
Fonts have feelings too
You Don’t Want Your Thanksgiving to Go Like This
What I’ve Learned in My First Month as a VC
Why Quantity Should be Your Priority
My Airbnb story
I Wanna Date You Like An Animal
The GIF Guide to Getting Paid
How We Discovered the Underground Chinese App Market
First Images of a Heart Injected with Liquid Metal 
Beyonce Broke the Music Business
“View mode” approach to responsive web design
Sometimes You Will Forget Your Mom Has Cancer
Darkness Ray Beams Invisibility From A Distance
Why Work As We Know It May Be Immoral
Staying Ahead of the Curve
The Geekiest Game Ever Made Has Been Released In Germany 
The Dirty Secret Behind the Salesforce $1M Hackathon
I’m a really good impostor
Mathematical Model of Zombie Epidemics Reveals Two Types of Living-Dead Infections
The Heartbreak Kid
200 Things
I’m Not Racist But—
Duel of the Superbattleships
23 and You
The Seattle NO
I’m a vaccine refuser. There, I said it. 
The Year We Broke Everything
How to make a DIY home alarm system with a raspberry pi and a webcam
Strike While the App is Hot
How to Fall In (and Out) of Love:
Why did Google make an ad for promoting “Search” in India where it has over 97% market share?
A Holiday Message From Jesus
Revealed: The Soviet Union’s $1 Billion ‘Psychotronic’ Arms Race with the US
Postmortem of a Venture-backed Startup
The 1.x Crore Myth
The “Getting Shit Done” Sleep Cycle 
Is the F-35 Joint Strike Fighter the New F-4?
Can the F-35 Win a Dogfight?
Responsive Photosets
Fightball: Millennials vs Boomers
The iconicity of “peaceful resistance”
How We Make Chocolate
Five Ships of the Chinese Navy You Really Ought to Know About
Glassholes and Black Rock City
Bad News for U.S. Warplane Pilots: Russia’s New Dogfighting Missile Can’t Miss
How Antisec Died
10 ways you’ll probably f**k up your startup
UPDATED: Finding the unjustly homeless, and teaching them to code.
Technology hasn’t Changed Us.
What I’ve learned from fatherhood 

Solution 2

You can try this:

import time

from selenium import webdriver

pause = 10
# Note: PhantomJS is deprecated; headless Chrome or Firefox also works
driver = webdriver.PhantomJS(executable_path='phantomjs.exe')
driver.get("your_url")

# Scroll down repeatedly until the page height stops growing
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(pause)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height
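Once the loop finishes, the fully rendered DOM can be handed to BeautifulSoup via driver.page_source. A minimal sketch of that parsing step, using a static HTML snippet as a stand-in for driver.page_source (the post-item-title class is taken from Solution 1; the markup here is illustrative, not Medium's actual markup):

```python
from bs4 import BeautifulSoup

# Static HTML standing in for driver.page_source after scrolling;
# the "post-item-title" class matches Solution 1, the markup is made up.
page_source = """
<div class="post-item"><h3 class="post-item-title">First post</h3></div>
<div class="post-item"><h3 class="post-item-title">Second post</h3></div>
"""

soup = BeautifulSoup(page_source, "html.parser")
titles = [el.get_text(strip=True) for el in soup.select(".post-item-title")]
print(titles)  # -> ['First post', 'Second post']
```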
Author: user3093455

Updated on February 14, 2020

Comments

  • user3093455
    user3093455 over 4 years

    I am trying to extract information from this page. The page loads 10 items at a time, and I need to scroll to load all entries (for a total of 100). I am able to parse the HTML and get the information that I need for the first 10 entries, but I want to fully load all entries before parsing the HTML.

    I am using python, requests, and BeautifulSoup. The way I parse the page when it loads with the first 10 entries is as follows:

    from bs4 import BeautifulSoup
    import requests
    s = requests.Session()
    r = s.get('https://medium.com/top-100/december-2013')
    page = BeautifulSoup(r.text)
    

    But this only loads the first 10 entries. So I looked at the page, found the AJAX request used to load the subsequent entries, and I get a response, but it's in a funky JSON format and I'd rather use the HTML parser instead of parsing JSON. Here's the code:

    from bs4 import BeautifulSoup
    import requests
    import json
    s = requests.Session()
    url = 'https://medium.com/top-100/december-2013/load-more'
    payload = {"count":100}
    r = s.post(url, data=payload)
    page = json.loads(r.text[16:]) #skip some chars that throw json off
    

    This gives me the data, but it's in a very long and convoluted JSON structure; I would much rather load all the data on the page and simply parse the HTML. In addition, the rendered HTML provides more information than the JSON response (e.g. the author's name instead of an obscure user ID). There was a similar question here but no relevant answers. Ideally I want to make the POST call and then request the HTML and parse it, but I haven't been able to do that.

  • user3093455
    user3093455 over 10 years
    Thanks @praveen! That looks like what I am looking for. However it seems that the elem.send_keys(Keys.PAGE_DOWN) isn't triggering the page down as in your video. Could it be my slow Internet connection?
  • user3093455
    user3093455 over 10 years
    EDIT: it seems that for Keys.PAGE_DOWN to work, the window must be in front. I couldn't find a way to auto-focus the Chrome window, so it seems that I have to manually click on the window when it pops up. Thank you for your help @praveen!
  • praveen
    praveen over 10 years
    Yeah! I forgot to tell you that. Chrome was unable to focus on the window no matter what element I tried to focus on. I had to manually click on the window.
  • user3496060
    user3496060 about 7 years
    I had to download a chromedriver and specify the path: browser = webdriver.Chrome(r'chromedriverpath')
  • Verv
    Verv over 6 years
    Don't link someone else's answer to answer a question.
  • Roul
    Roul about 6 years
    How to get the html content which will be parsed by bs4?
  • Luis Miguel
    Luis Miguel about 6 years
    Great answer @praveen! I also checked out your YT video. I was looking for a solution like this to get some data from a similar site.
  • neelmeg
    neelmeg about 3 years
    This works to some extent, but does not work for youtube.com search results, which have infinite scrolling.
  • Shashwat Swain
    Shashwat Swain about 2 years
    html = BeautifulSoup(driver.page_source)
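As an aside on the question's own load-more approach: the hard-coded slice r.text[16:] breaks if the length of the guard prefix ever changes. Slicing from the first { is more robust. A sketch against a canned response (the guard prefix shown is typical of such endpoints, and the payload shape is illustrative, not Medium's actual API):

```python
import json

# Canned string standing in for r.text from the load-more endpoint;
# the "])}while(1);</x>" guard prefix and payload shape are illustrative.
raw = '])}while(1);</x>{"payload": {"posts": [{"title": "First"}, {"title": "Second"}]}}'

# Slice from the first brace instead of hard-coding a prefix length
data = json.loads(raw[raw.index("{"):])
titles = [post["title"] for post in data["payload"]["posts"]]
print(titles)  # -> ['First', 'Second']
```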