When you call driver.Navigate().GoToUrl(url), code execution stops until the page is fully loaded. This is often unnecessary when you only want to extract data from the page.
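For reference, a minimal sketch of the default behaviour (the URL and the timing code are illustrative only): the navigation call returns only after the browser reports the page as loaded.
C#
using (var driver = new ChromeDriver())
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    // Blocks until the browser considers the page fully loaded
    driver.Navigate().GoToUrl("http://stackoverflow.com");
    Console.WriteLine($"Full page load took {stopwatch.Elapsed.TotalSeconds:F1} s");
}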
Note: The code samples below could be considered hacks. There is no "official" way of doing this.
Create and launch a thread that loads the page, then wait only for the element you need.
C#
using (var driver = new ChromeDriver())
{
    // Load the page on a background thread so the main thread is not blocked
    new Thread(() =>
    {
        driver.Navigate().GoToUrl("http://stackoverflow.com");
    }).Start();

    // Continue as soon as the element we need is visible, even if the page is still loading
    new WebDriverWait(driver, TimeSpan.FromSeconds(10))
        .Until(ExpectedConditions.ElementIsVisible(By.XPath("//div[@class='summary']/h3/a")));
}
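An equivalent sketch using Task.Run instead of a raw Thread (this variant is an assumption, not part of the original example); the idea is the same: start navigation in the background and return as soon as the target element is visible.
C#
using (var driver = new ChromeDriver())
{
    // Start navigation in the background; the task is intentionally never awaited,
    // so any exception thrown by GoToUrl will go unobserved
    Task.Run(() => driver.Navigate().GoToUrl("http://stackoverflow.com"));

    // Proceed as soon as the element we care about is visible
    new WebDriverWait(driver, TimeSpan.FromSeconds(10))
        .Until(ExpectedConditions.ElementIsVisible(By.XPath("//div[@class='summary']/h3/a")));
}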
Alternatively, set a page-load timeout: after the given period, navigation throws a WebDriverTimeoutException, which stops the page from loading further. In the catch block, you can then wait for the element you need.
C#
using (var driver = new ChromeDriver())
{
    // Give the page at most 5 seconds to load before the driver throws
    driver.Manage().Timeouts().SetPageLoadTimeout(TimeSpan.FromSeconds(5));
    try
    {
        driver.Navigate().GoToUrl("http://stackoverflow.com");
    }
    catch (WebDriverTimeoutException)
    {
        // The load was cut off; wait only for the element we actually need
        new WebDriverWait(driver, TimeSpan.FromSeconds(10))
            .Until(ExpectedConditions.ElementIsVisible(By.XPath("//div[@class='summary']/h3/a")));
    }
}
The problem: if you set the timeout too short, the page stops loading whether or not your desired element is present; if you set it too long, you negate the performance benefit.
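One way to keep that trade-off in a single place is to wrap the pattern in a small helper. This is only a sketch; the method name NavigateAndWaitFor and its parameters are illustrative, not part of Selenium.
C#
// Hypothetical helper combining the page-load timeout and the explicit wait
static void NavigateAndWaitFor(IWebDriver driver, string url, By locator,
                               TimeSpan pageLoadTimeout, TimeSpan waitTimeout)
{
    driver.Manage().Timeouts().SetPageLoadTimeout(pageLoadTimeout);
    try
    {
        driver.Navigate().GoToUrl(url);
    }
    catch (WebDriverTimeoutException)
    {
        // The page load was cut off; fall through and wait for the element below
    }
    // Wait only for the element we actually need, whether or not the full page loaded
    new WebDriverWait(driver, waitTimeout)
        .Until(ExpectedConditions.ElementIsVisible(locator));
}
A call might then look like: NavigateAndWaitFor(driver, "http://stackoverflow.com", By.XPath("//div[@class='summary']/h3/a"), TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(10)); — so the two timeouts can be tuned in one place per page.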