18. Scrape Quotes
http://quotes.toscrape.com/
1. Get the names of all the authors on the first page.
import requests
import bs4

base_url = 'http://quotes.toscrape.com/'
result = requests.get(base_url)
soup = bs4.BeautifulSoup(result.text, 'lxml')
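Before parsing, it can be worth confirming the request actually succeeded; a one-line sanity check using requests' built-in helper:

# Optional: raise an HTTPError for 4xx/5xx responses before parsing
result.raise_for_status()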
# Every author name sits in an element with class "author"
soup.select('.author')

for author in soup.select('.author'):
    print(author.text)
# Use a set so each author appears only once
authors = set()
for author in soup.select('.author'):
    authors.add(author.text)

authors
type(authors)
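The loop-and-add pattern above can be collapsed into a set comprehension; a minimal sketch of the same idea:

# Same result as the loop above, written as a set comprehension
authors = {author.text for author in soup.select('.author')}
authors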
2. Create a list of all the quotes on the first page.
# The quote text lives in elements with class "text"
quotes = []
for quote in soup.select('.text'):
    quotes.append(quote.text)

quotes
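Since each quote and its author live inside a shared container, you can also extract them together. A small sketch, assuming each quote block carries the class "quote" (as it appears when you inspect the page):

# Sketch: pull each quote together with its author by walking the
# parent ".quote" containers (container class assumed from inspection)
for block in soup.select('.quote'):
    text = block.select_one('.text').text
    name = block.select_one('.author').text
    print(f'{name}: {text}')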
3. Inspect the site and use Beautiful Soup to extract the Top Ten tags shown on the right-hand side of the home page (e.g. Love, Inspirational, Life, etc.).
# The Top Ten tags each sit in an element with class "tag-item"
soup.select('.tag-item')

for tag in soup.select('.tag-item'):
    print(tag.text.strip())  # .strip() removes the surrounding whitespace
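Each ".tag-item" wraps an anchor, so the link behind each tag is available too; a minimal sketch (anchor structure assumed from inspecting the page):

# Sketch: tag name plus the relative URL it links to
for item in soup.select('.tag-item'):
    link = item.select_one('a')
    print(link.text.strip(), '->', link['href'])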
4. Notice that there is more than one page, and that subsequent pages look like http://quotes.toscrape.com/page/2/. Use what you know about for loops and string concatenation to loop through all the pages and collect every unique author on the website. Keep in mind there are many ways to achieve this; note also that you will need some way to detect that your loop has reached the last page with quotes. For debugging purposes: there are only 10 pages, so the last page is http://quotes.toscrape.com/page/10/, but try to write a loop robust enough that it doesn't need to know the page count beforehand.
# First approach: hard-code the known page count (pages 1-10)
base_url = 'http://quotes.toscrape.com/page/'
authors = set()

for i in range(1, 11):
    scrape_url = base_url + str(i)
    result = requests.get(scrape_url)
    soup = bs4.BeautifulSoup(result.text, 'lxml')
    for author in soup.select('.author'):
        authors.add(author.text)

authors
# Probe a page number far past the end to see what an out-of-range
# page looks like: the site still serves a page, but its body
# contains the message 'No quotes found!'
scrape_url = base_url + str(999999)
result = requests.get(scrape_url)
soup = bs4.BeautifulSoup(result.text, 'lxml')
soup
# Robust approach: keep requesting pages until the site reports
# that there are no more quotes
page = 1
authors = set()
base_url = 'http://quotes.toscrape.com/page/'

while True:
    scrape_url = base_url + str(page)
    result = requests.get(scrape_url)

    # An out-of-range page contains this message in its body
    if 'No quotes found!' in result.text:
        break

    soup = bs4.BeautifulSoup(result.text, 'lxml')
    for author in soup.select('.author'):
        authors.add(author.text)
    page += 1

authors
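The string check above depends on the exact 'No quotes found!' message. A more content-based stop condition is to end the loop when a page yields no author elements at all; a minimal sketch of that variant:

# Alternative stop condition (sketch): break when a page has no authors
page = 1
authors = set()
while True:
    result = requests.get(base_url + str(page))
    soup = bs4.BeautifulSoup(result.text, 'lxml')
    names = soup.select('.author')
    if not names:  # past the last page with quotes
        break
    authors.update(a.text for a in names)
    page += 1

len(authors)  # should match the set built with the string check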