Scraping wunderground without API, using python

I'm not very experienced in the world of scraping data, so the problem here may be obvious to some.



What I want is to scrape historical daily weather data from wunderground.com, without paying for the API. Maybe it's not possible at all.



My method is simply to use requests.get and save the whole text into a file (code below).



Instead of getting the tables that can be accessed from the web browser (see image below), the result is a file that has almost everything but those tables. Something like this:



Summary

No data recorded
Daily Observations

No Data Recorded



What is weird is that if I save the page with Firefox's save-as, the result depends on whether I choose 'web page, HTML only' or 'web page, complete': the latter includes the data I'm interested in, the former does not.



Is it possible that this is on purpose, so nobody scrapes their data? I just wanted to make sure there is not a workaround for this problem.



Thanks in advance,
Juan



Note: I tried using the user-agent field to no avail.



# Note: I run > set PYTHONIOENCODING=utf-8 before executing python
import requests

# URL with wunderground weather information for a specific date:
date = '2019-03-12'
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/' + date
r = requests.get(url)

# Write a file to check if the tables are being retrieved:
with open('test.html', 'wb') as testfile:
    testfile.write(r.text.encode('utf-8'))
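
(For context, this is roughly what trying the user-agent field looked like. A minimal sketch continuing the snippet above; the header string is an example, not the exact value I tried:)

headers = {'User-Agent': 'Mozilla/5.0'}  # example browser-like value, not the exact one used
r = requests.get(url, headers=headers)
# The tables are still missing: they are filled in by JavaScript after page load.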


Screenshot of the tables I want to scrape.




UPDATE: FOUND A SOLUTION



Thanks to the comments pointing me to the selenium module; it is exactly the solution I needed. The code below extracts all the tables present at the URL of a given date (as seen when visiting the site normally). It still needs modifications to scrape over a list of dates and to organize the CSV files it creates.



Note: geckodriver.exe is needed in the working directory.



from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.keys import Keys
import requests, sys, re

# URL with wunderground weather information
url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-3-12'

# Commands related to the webdriver (not sure what they do, but I can guess):
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
br = webdriver.Firefox(firefox_binary=bi)

# This starts an instance of Firefox at the specified URL:
br.get(url)

# At this point the rendered page source is available in html format and can be
# extracted with BeautifulSoup:
sopa = BeautifulSoup(br.page_source, 'lxml')

# Close the Firefox instance started before:
br.quit()

# I'm only interested in the tables contained on the page:
tablas = sopa.find_all('table')

# Write all the tables into CSV files:
for i in range(len(tablas)):
    out_file = open('wunderground' + str(i + 1) + '.csv', 'w', encoding='UTF-8')
    tabla = tablas[i]

    # ---- Write the table header: ----
    table_head = tabla.findAll('th')
    output_head = []
    for head in table_head:
        output_head.append(head.text.strip())

    # Some cleaning and formatting of the text before writing:
    encabezado = '"' + '";"'.join(output_head) + '"'
    encabezado = re.sub(r'\s', '', encabezado) + '\n'
    out_file.write(encabezado)

    # ---- Write the rows: ----
    filas = tabla.findAll('tr')
    for j in range(1, len(filas)):
        table_row = filas[j]
        columns = table_row.findAll('td')
        output_row = []
        for column in columns:
            output_row.append(column.text.strip())

        # Some cleaning and formatting of the text before writing:
        fila = '"' + '";"'.join(output_row) + '"'
        fila = re.sub(r'\s', '', fila) + '\n'
        out_file.write(fila)

    out_file.close()
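
(A minimal sketch of how scraping over a list of dates could look. scrape_day is a hypothetical wrapper around the selenium/BeautifulSoup logic above, not code from the post:)

from datetime import date, timedelta

start = date(2019, 3, 1)
for n in range(10):
    d = start + timedelta(days=n)
    url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/' + d.isoformat()
    # scrape_day(url)  # hypothetical: runs the scraping above, naming the CSVs by date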



Extra: the answer by @QHarr works beautifully, but I needed a couple of modifications because I use Firefox on my PC. Note that for this to work, the geckodriver.exe file must be in the working directory. Here's the code:



from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
bi = FirefoxBinary(r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe')
driver = webdriver.Firefox(firefox_binary=bi)
# driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))




























Comments:

  • This is a common scraping issue - most modern webpages are heavily reliant on JavaScript, which requires a VM to execute inside of. When you use requests, or curl, all you get is the raw HTML, without any of the functionality that the JavaScript provides. A good workaround for scraping is to use the selenium library, which gives you that JavaScript VM. It's a steep learning curve, but well worth it. – Danielle M., Mar 22 at 19:19

  • Why don't you want to use the API? Actually, scraping via the API is much easier and more reliable. – omegastripes, Mar 22 at 19:27

  • @omegastripes: money – Juan, Mar 22 at 19:30

  • If you really want to scrape a webpage with JavaScript, you might want to use Selenium or something that can run an actual headless browser. – Random Davis, Mar 22 at 19:32

  • @Juan That is quite simple stuff, take a look at this – omegastripes, Mar 25 at 21:23

python web-scraping wunderground

Asked Mar 22 at 19:06 by Juan; edited Mar 25 at 18:53.


2 Answers
You could use selenium to ensure the page has loaded, then pandas read_html to get the tables:



from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

url = 'https://www.wunderground.com/history/daily/sd/khartoum/HSSS/date/2019-03-12'
driver = webdriver.Chrome()
driver.get(url)
tables = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table")))
for table in tables:
    newTable = pd.read_html(table.get_attribute('outerHTML'))
    if newTable:
        print(newTable[0].fillna(''))
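
(A possible refinement, not part of the original answer: running Chrome headless so no browser window opens. A sketch assuming a Selenium version that accepts ChromeOptions via the options keyword:)

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # render pages without opening a window
driver = webdriver.Chrome(options=options)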





answered Mar 22 at 20:25 by QHarr
  • This works beautifully, with some modifications for Firefox. Added an adapted version of your code in my post. Thanks! – Juan, Mar 25 at 18:54


















Another direction: use the API calls that the website itself makes.



(The HTTP call was taken from Chrome developer tools)



Example:



HTTP GET https://api-ak.wunderground.com/api/d8585d80376a429e/history_20180812/lang:EN/units:english/bestfct:1/v:2.0/q/HSSS.json?showObs=0&ttl=120


Response




"response":
"version": "2.0",
"units": "english",
"termsofService": "https://www.wunderground.com/weather/api/d/terms.html",
"attribution":
"image":"//icons.wxug.com/graphics/wu2/logo_130x80.png",
"title":"Weather Underground",
"link":"http://www.wunderground.com"
,
"features":
"history": 1

, "location":
"name": "Khartoum",
"neighborhood":null,
"city": "Khartoum",
"state": null,
"state_name":"Sudan",
"country": "SD",
"country_iso3166":"SA",
"country_name":"Saudi Arabia",
"continent":"AS",
"zip":"00000",
"magic":"474",
"wmo":"62721",
"radarcode":"xxx",
"radarregion_ic":null,
"radarregion_link": "//",
"latitude":15.60000038,
"longitude":32.54999924,
"elevation":null,
"wfo": null,
"l": "/q/zmw:00000.474.62721",
"canonical": "/weather/sa/khartoum"
,
"date":
"epoch": 1553287561,
"pretty": "11:46 PM EAT on March 22, 2019",
"rfc822": "Fri, 22 Mar 2019 23:46:01 +0300",
"iso8601": "2019-03-22T23:46:01+0300",
"year": 2019,
"month": 3,
"day": 22,
"yday": 80,
"hour": 23,
"min": "46",
"sec": 1,
"monthname": "March",
"monthname_short": "Mar",
"weekday": "Friday",
"weekday_short": "Fri",
"ampm": "PM",
"tz_short": "EAT",
"tz_long": "Africa/Khartoum",
"tz_offset_text": "+0300",
"tz_offset_hours": 3.00


,
"history":
"start_date":
"epoch": 1534064400,
"pretty": "12:00 PM EAT on August 12, 2018",
"rfc822": "Sun, 12 Aug 2018 12:00:00 +0300",
"iso8601": "2018-08-12T12:00:00+0300",
"year": 2018,
"month": 8,
"day": 12,
"yday": 223,
"hour": 12,
"min": "00",
"sec": 0,
"monthname": "August",
"monthname_short": "Aug",
"weekday": "Sunday",
"weekday_short": "Sun",
"ampm": "PM",
"tz_short": "EAT",
"tz_long": "Africa/Khartoum",
"tz_offset_text": "+0300",
"tz_offset_hours": 3.00
,
"end_date":
"epoch": null,
"pretty": null,
"rfc822": null,
"iso8601": null,
"year": null,
"month": null,
"day": null,
"yday": null,
"hour": null,
"min": null,
"sec": null,
"monthname": null,
"monthname_short": null,
"weekday": null,
"weekday_short": null,
"ampm": null,
"tz_short": null,
"tz_long": null,
"tz_offset_text": null,
"tz_offset_hours": null
,
"days": [

"summary":
"date":
"epoch": 1534021200,
"pretty": "12:00 AM EAT on August 12, 2018",
"rfc822": "Sun, 12 Aug 2018 00:00:00 +0300",
"iso8601": "2018-08-12T00:00:00+0300",
"year": 2018,
"month": 8,
"day": 12,
"yday": 223,
"hour": 0,
"min": "00",
"sec": 0,
"monthname": "August",
"monthname_short": "Aug",
"weekday": "Sunday",
"weekday_short": "Sun",
"ampm": "AM",
"tz_short": "EAT",
"tz_long": "Africa/Khartoum",
"tz_offset_text": "+0300",
"tz_offset_hours": 3.00
,
"temperature": 82,
"dewpoint": 66,
"pressure": 29.94,
"wind_speed": 11,
"wind_dir": "SSE",
"wind_dir_degrees": 166,
"visibility": 5.9,
"humidity": 57,
"max_temperature": 89,
"min_temperature": 75,
"temperature_normal": null,
"min_temperature_normal": null,
"max_temperature_normal": null,
"min_temperature_record": null,
"max_temperature_record": null,
"min_temperature_record_year": null,
"max_temperature_record_year": null,
"max_humidity": 83,
"min_humidity": 40,
"max_dewpoint": 70,
"min_dewpoint": 63,
"max_pressure": 29.98,
"min_pressure": 29.89,
"max_wind_speed": 22,
"min_wind_speed": 5,
"max_visibility": 6.2,
"min_visibility": 1.9,
"fog": 0,
"hail": 0,
"snow": 0,
"rain": 1,
"thunder": 0,
"tornado": 0,
"snowfall": null,
"monthtodatesnowfall": null,
"since1julsnowfall": null,
"snowdepth": null,
"precip": 0.00,
"preciprecord": null,
"preciprecordyear": null,
"precipnormal": null,
"since1janprecipitation": null,
"since1janprecipitationnormal": null,
"monthtodateprecipitation": null,
"monthtodateprecipitationnormal": null,
"precipsource": "3Or6HourObs",
"gdegreedays": 32,
"heatingdegreedays": 0,
"coolingdegreedays": 17,
"heatingdegreedaysnormal": null,
"monthtodateheatingdegreedays": null,
"monthtodateheatingdegreedaysnormal": null,
"since1sepheatingdegreedays": null,
"since1sepheatingdegreedaysnormal": null,
"since1julheatingdegreedays": null,
"since1julheatingdegreedaysnormal": null,
"coolingdegreedaysnormal": null,
"monthtodatecoolingdegreedays": null,
"monthtodatecoolingdegreedaysnormal": null,
"since1sepcoolingdegreedays": null,
"since1sepcoolingdegreedaysnormal": null,
"since1jancoolingdegreedays": null,
"since1jancoolingdegreedaysnormal": null
,
"avgoktas": 5,
"icon": "rain"


]
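
(A hedged sketch of how that call could be made from Python, not part of the original answer. The API key embedded in the URL was captured from a browser session and may expire; the field names follow the response above:)

import requests

# Endpoint copied from Chrome developer tools; the key and the history_YYYYMMDD
# segment come from the browser session (assumed to stay valid for a while).
url = ('https://api-ak.wunderground.com/api/d8585d80376a429e/'
       'history_20180812/lang:EN/units:english/bestfct:1/v:2.0/q/HSSS.json')
data = requests.get(url, params={'showObs': 0, 'ttl': 120}).json()

summary = data['history']['days'][0]['summary']
print(summary['max_temperature'], summary['min_temperature'], summary['precip'])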







answered Mar 22 at 20:50 by balderman
  • Hi, thanks for this, but unfortunately I cannot understand this with my current knowledge. How would I run those commands? Is it Windows cmd, the Unix command line, something else? – Juan, Mar 25 at 18:57










