I'm using Python's Selenium library for crawling, and my scripts run as Lambdas on AWS. This is what my driver config looks like:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def get_headless_driver():
    options = Options()
    # headless Chromium binary shipped in the Lambda layer
    options.binary_location = '/opt/headless-chromium'
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')
    options.add_argument('--single-process')
    options.add_argument('--disable-dev-shm-usage')
    options.add_argument('--window-size=1920x1080')
    options.add_argument('--start-maximized')
    # chromedriver binary also lives in the layer
    return webdriver.Chrome('/opt/chromedriver', chrome_options=options)
driver = get_headless_driver()
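For context, the Lambda handler on top of this config is roughly shaped like the sketch below; the URL and the short XPath in it are placeholders (the real locator is the long XPath shown further down):

def lambda_handler(event, context):
    # placeholder URL for the page being crawled
    driver.get('https://example.com/table-page')
    # placeholder locator; the real one is the long XPath below
    cell_text = driver.find_element_by_xpath('//table//td').text
    return {'statusCode': 200, 'body': cell_text}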
The script ran fine for a couple of months and then stopped, with an error saying it cannot locate an element:
driver.find_element_by_xpath('//*[@id="x-widget-9"]/div[2]/div[1]/div/div/div[1]/div/div[8]/div[2]/div[1]/table/tbody[2]/tr/td[1]/div/div/div[1]').text
I have tried targeting a different element, but it seems like the driver does not find anything at all. The HTML that my driver gets back is empty and looks like this:
<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body></body></html>
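To see whether this is just a timing/rendering problem, something like the sketch below could be added to the handler: it waits for the same XPath with an explicit wait and, on timeout, logs whatever HTML the browser actually produced so it shows up in CloudWatch.

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

xpath = ('//*[@id="x-widget-9"]/div[2]/div[1]/div/div/div[1]/div'
         '/div[8]/div[2]/div[1]/table/tbody[2]/tr/td[1]/div/div/div[1]')
try:
    # wait up to 30 seconds instead of reading the element immediately
    element = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.XPATH, xpath))
    )
    print(element.text)
except TimeoutException:
    # dump whatever the headless browser rendered, for inspection in the logs
    print(driver.page_source)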
I have tested it locally (on my computer, without AWS Lambda) and everything works fine.
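One difference worth checking is whether the headless Chromium and chromedriver binaries in the Lambda layer still match each other; below is a small sketch that could be dropped into the handler to log the versions Selenium reports (the capability key names here are an assumption and can differ slightly between Chrome versions):

caps = driver.capabilities
# newer Chrome reports 'browserVersion'; older releases used 'version'
print('browser version:', caps.get('browserVersion') or caps.get('version'))
# the chromedriver version sits under the 'chrome' capability dict
print('driver version :', caps.get('chrome', {}).get('chromedriverVersion'))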
Do you know what the issue could be?
source https://stackoverflow.com/questions/74132020/python-selenium-chrome-headless-in-aws-lambda-can-not-find-an-element