Merged
Commits
64 commits
3833956
fix: London Borough Redbridge
m26dvd Feb 2, 2026
ce62fa2
fix: Harborough District Council
m26dvd Feb 2, 2026
a46ffb6
fix: Adding Hammersmith & Fulham
m26dvd Feb 2, 2026
24c50f8
Merge branch 'master' into master
m26dvd Feb 2, 2026
a9f94d4
docs: Update Councils.md from input.json
actions-user Feb 2, 2026
5dccfdc
fix: HarboroughDistrictCouncil
m26dvd Feb 2, 2026
9c45fde
fix: LondonBoroughHammersmithandFulham
m26dvd Feb 2, 2026
add90fc
fix: LondonBoroughHammersmithandFulham
m26dvd Feb 2, 2026
909dd20
chore: bump pip from 25.3 to 26.0
dependabot[bot] Feb 3, 2026
be071e2
fix: Powys Council
m26dvd Feb 5, 2026
8ed5d89
use useragent to avoid bromley headless browser block
oliyh Feb 6, 2026
c1c70e6
fix: Mid Suffolk District Council
m26dvd Feb 6, 2026
fb9fb43
Fix Richmond Upon Thames Council
sencercoltu Feb 7, 2026
3ac05f9
chore: bump pillow from 12.0.0 to 12.1.1
dependabot[bot] Feb 11, 2026
473c40b
fix: Bromley Borough Council
m26dvd Feb 11, 2026
8f2c1f0
fix: updated ID's for multiple elements that had changed
Chewbacca222222 Feb 12, 2026
ef6ec89
fix: Wakefield City Council
m26dvd Feb 13, 2026
8e91639
fix: NewhamCouncil - disable SSL verification to resolve certificate …
TalhaMangarah Feb 16, 2026
f0b83be
fix: NewhamCouncil - correct datetime parsing from DD/MM/YYYY to MM/D…
TalhaMangarah Feb 16, 2026
d05e2c7
feat: NewhamCouncil - add food waste collection scraping
TalhaMangarah Feb 16, 2026
db52375
fix: Redcar and Cleveland Council
m26dvd Feb 17, 2026
3b561bc
fix: nuneaton and bedworth
jpitcairn Feb 18, 2026
40b1624
fix: nuneaton and bedworth
jpitcairn Feb 18, 2026
f2b6ef9
Update CumberlandCouncil.py
makemelegal Feb 19, 2026
7e83de2
Update CumberlandCouncil.py
makemelegal Feb 19, 2026
1747edf
fix: Barking & Dagenham
m26dvd Feb 20, 2026
3aed4cc
fix: Cumberland Council
m26dvd Feb 20, 2026
36573c3
fix: North East Derbyshire District Council
m26dvd Feb 20, 2026
c5cf578
fix: Leeds City Council
m26dvd Feb 24, 2026
0da361a
chore: bump actions/upload-artifact from 6 to 7
dependabot[bot] Feb 27, 2026
56dfaa5
fix: London Borough Havering
m26dvd Feb 27, 2026
8f2823f
fix: Eastleigh Borough Council
m26dvd Feb 27, 2026
b4daf33
rename tool.poetry.dev-dependencies to tool.poetry.group.dev.dependen…
lb803 Feb 28, 2026
7e98316
remove the
lb803 Feb 28, 2026
3db89c8
fix: update address selection XPath for BroxbourneCouncil
teofanis Mar 3, 2026
04a919c
fix: Merton Council
m26dvd Mar 4, 2026
f23b71f
chore: bump docker/login-action from 3 to 4
dependabot[bot] Mar 5, 2026
6df46f8
chore: bump docker/build-push-action from 6 to 7
dependabot[bot] Mar 6, 2026
e2c78af
fix: Midlothian Council
m26dvd Mar 9, 2026
93fa9f8
fix: Hinckley & Bosworth Council
m26dvd Mar 9, 2026
3593f0d
fix: Bath and North East Somerset
m26dvd Mar 9, 2026
ceb5157
fix: Adding North Warwickshire Borough Council
m26dvd Mar 9, 2026
f7f1d6c
fix: Broxtowe Borough Council
m26dvd Mar 10, 2026
a0cdb2e
chore: bump black from 25.1.0 to 26.3.1
dependabot[bot] Mar 12, 2026
ae601b2
Update CumberlandCouncil.py
makemelegal Mar 12, 2026
9692ecd
Merge PR #1843: chore: bump pip from 25.3 to 26.0
robbrad Mar 14, 2026
a40bb9a
Merge PR #1847: fix: use useragent to avoid bromley headless browser …
robbrad Mar 14, 2026
c66b89d
Merge PR #1849: Fix Richmond Upon Thames Council
robbrad Mar 14, 2026
d129819
Merge PR #1852: chore: bump pillow from 12.0.0 to 12.1.1
robbrad Mar 14, 2026
8cf8c10
Merge PR #1854: fix: updated IDs for multiple elements (Swale)
robbrad Mar 14, 2026
ecd2bab
Merge PR #1856: fix: Newham Council fix
robbrad Mar 14, 2026
d45d733
Merge PR #1857: fix: Nuneaton and Bedworth
robbrad Mar 14, 2026
468de51
Merge PR #1860: Update CumberlandCouncil.py (kept #1841 HTML selector…
robbrad Mar 14, 2026
4fc80c5
Merge PR #1866: chore: bump actions/upload-artifact from 6 to 7
robbrad Mar 14, 2026
ddb9e48
Merge PR #1871: Chore: Move dev-dependencies to [tool.poetry.group.dev]
robbrad Mar 14, 2026
555480f
Merge PR #1873: fix: address selection in Broxbourne Council
robbrad Mar 14, 2026
f9df0c4
Merge PR #1877: chore: bump docker/login-action from 3 to 4
robbrad Mar 14, 2026
0a11729
Merge PR #1878: chore: bump docker/build-push-action from 6 to 7
robbrad Mar 14, 2026
026d167
Merge PR #1882: chore: bump black from 25.1.0 to 26.3.1
robbrad Mar 14, 2026
bb96408
chore: regenerate poetry.lock after dev-dependencies group rename
robbrad Mar 14, 2026
385c98c
chore: bump Chrome user agent strings to v134 across 37 council scrapers
robbrad Mar 14, 2026
58eb0ff
chore: bump remaining non-standard Chrome user agent strings to v134 …
robbrad Mar 14, 2026
284fdb1
Merge pull request #1883 from robbrad/march-2026-release
robbrad Mar 14, 2026
f5adb70
bump: version 0.163.0 → 0.164.0
github-actions[bot] Mar 14, 2026
2 changes: 1 addition & 1 deletion .github/workflows/ha_compatibility_test.yml
@@ -181,7 +181,7 @@ jobs:

- name: Upload HA log (always)
if: always()
-uses: actions/upload-artifact@v6
+uses: actions/upload-artifact@v7
with:
name: ha-log-${{ matrix.ha_version }}
path: home-assistant.log
4 changes: 2 additions & 2 deletions .github/workflows/release.yml
@@ -90,13 +90,13 @@ jobs:
uses: actions/checkout@v6

- name: Login to Docker Hub
-uses: docker/login-action@v3
+uses: docker/login-action@v4
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_API_KEY }}

- name: Build and push Docker image
-uses: docker/build-push-action@v6
+uses: docker/build-push-action@v7
with:
context: ./uk_bin_collection_api_server
push: true
59 changes: 59 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,63 @@
=======
## 0.164.0 (2026-03-14)

### Feat

- NewhamCouncil - add food waste collection scraping

### Fix

- update address selection XPath for BroxbourneCouncil
- nuneaton and bedworth
- nuneaton and bedworth
- NewhamCouncil - correct datetime parsing from DD/MM/YYYY to MM/DD/YYYY
- NewhamCouncil - disable SSL verification to resolve certificate verification errors
- updated ID's for multiple elements that had changed
- Broxtowe Borough Council
- #1872 - Broxtowe Borough Council
- Adding North Warwickshire Borough Council
- #1869 - Adding North Warwickshire Borough Council
- Bath and North East Somerset
- #1876 - Bath and North East Somerset
- Hinckley & Bosworth Council
- #1879 - Hinckley & Bosworth Council
- Midlothian Council
- #1880 Midlothian Council
- Merton Council
- #1868 - Merton Council
- Eastleigh Borough Council
- #1867 - Eastleigh Borough Council
- London Borough Havering
- #1863 - London Borough Havering
- Leeds City Council
- #1864 - Leeds City Council
- North East Derbyshire District Council
- #1861 - North East Derbyshire District Council
- Cumberland Council
- #1858 - Cumberland Council
- Barking & Dagenham
- #1855 - Barking & Dagenham
- Redcar and Cleveland Council
- #1848 - Redcar and Cleveland Council
- Wakefield City Council
- #1853 - Wakefield City Council
- Bromley Borough Council
- #1851 Bromley Borough Council
- Mid Suffolk District Council
- #1845 - Mid Suffolk District Council
- Powys Council
- #1846 - Powys Council
- LondonBoroughHammersmithandFulham
- LondonBoroughHammersmithandFulham
- HarboroughDistrictCouncil
- HarboroughDistrictCouncil
- Adding Hammersmith & Fulham
- #1504 - Adding Hammersmith & Fulham
- Harborough District Council
- #1831 - Harborough District Council
- London Borough Redbridge
- #1836 - fix: London Borough Redbridge

## 0.163.0 (2026-02-02)

### Feat
2 changes: 1 addition & 1 deletion custom_components/uk_bin_collection/const.py
@@ -4,7 +4,7 @@

from homeassistant.const import Platform

-INPUT_JSON_URL = "https://raw.githubusercontent.com/robbrad/UKBinCollectionData/0.163.0/uk_bin_collection/tests/input.json"
+INPUT_JSON_URL = "https://raw.githubusercontent.com/robbrad/UKBinCollectionData/0.164.0/uk_bin_collection/tests/input.json"

DEFAULT_NAME = "UK Bin Collection Data"

4 changes: 2 additions & 2 deletions custom_components/uk_bin_collection/manifest.json
@@ -9,7 +9,7 @@
"integration_type": "service",
"iot_class": "cloud_polling",
"issue_tracker": "https://github.com/robbrad/UKBinCollectionData/issues",
-"requirements": ["uk-bin-collection>=0.163.0"],
-"version": "0.163.0",
+"requirements": ["uk-bin-collection>=0.164.0"],
+"version": "0.164.0",
"zeroconf": []
}
547 changes: 214 additions & 333 deletions poetry.lock

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "uk_bin_collection"
-version = "0.163.0"
+version = "0.164.0"
description = "Python Lib to collect UK Bin Data"
readme = "README.md"
authors = ["Robert Bradley <robbrad182@gmail.com>"]
@@ -24,7 +24,7 @@ issues = "https://github.com/robbrad/UKBinCollectionData/issues"
requires = ["poetry-core>=1.2.0"]
build-backend = "poetry.core.masonry.api"

-[tool.poetry.dev-dependencies]
+[tool.poetry.group.dev.dependencies]
black = "*"
coverage = "*"
flake8 = "*"
22 changes: 18 additions & 4 deletions uk_bin_collection/tests/input.json
@@ -1344,14 +1344,11 @@
"LAD24CD": "E07000121"
},
"LeedsCityCouncil": {
-"house_number": "1",
-"postcode": "LS6 2SE",
"skip_get_url": true,
"uprn": "72506983",
"url": "https://www.leeds.gov.uk/residents/bins-and-recycling/check-your-bin-day",
-"web_driver": "http://selenium:4444",
"wiki_name": "Leeds",
-"wiki_note": "Pass the house number, postcode, and UPRN. This parser requires a Selenium webdriver.",
+"wiki_note": "Pass the UPRN.",
"LAD24CD": "E08000035"
},
"LeicesterCityCouncil": {
@@ -1420,6 +1417,14 @@
"wiki_note": "Pass the UPRN. You can find it using [FindMyAddress](https://www.findmyaddress.co.uk/search).",
"LAD24CD": "E09000009"
},
+"LondonBoroughHammersmithandFulham": {
+"postcode": "W12 0BQ",
+"url": "https://www.lbhf.gov.uk/",
+"wiki_command_url_override": "https://www.lbhf.gov.uk/",
+"wiki_name": "Hammersmith & Fulham",
+"wiki_note": "Pass only the property postcode",
+"LAD24CD": "E09000013"
+},
"LondonBoroughHarrow": {
"uprn": "100021298754",
"url": "https://www.harrow.gov.uk",
@@ -1843,6 +1848,15 @@
"wiki_note": "Pass the UPRN. You can find it using [FindMyAddress](https://www.findmyaddress.co.uk/search).",
"LAD24CD": "E06000057"
},
+"NorthWarwickshireBoroughCouncil": {
+"uprn": "10001179576",
+"skip_get_url": true,
+"url": "https://www.northwarks.gov.uk",
+"web_driver": "http://selenium:4444",
+"wiki_name": "North Warwickshire",
+"wiki_note": "Pass the UPRN. You can find it using [FindMyAddress](https://www.findmyaddress.co.uk/search).",
+"LAD24CD": "E07000220"
+},
"NorwichCityCouncil": {
"house_number": "2",
"postcode": "NR2 3TT",
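The two council entries added to `input.json` above follow the file's standard shape. As an illustration (the check below is a hypothetical helper, not part of the repo), a new entry can be sanity-checked before committing it:

```python
import json

# Hypothetical standalone check mirroring the shape of entries in
# uk_bin_collection/tests/input.json (entry copied from the diff above)
entry_text = """
{
  "NorthWarwickshireBoroughCouncil": {
    "uprn": "10001179576",
    "skip_get_url": true,
    "url": "https://www.northwarks.gov.uk",
    "web_driver": "http://selenium:4444",
    "wiki_name": "North Warwickshire",
    "wiki_note": "Pass the UPRN. You can find it using [FindMyAddress](https://www.findmyaddress.co.uk/search).",
    "LAD24CD": "E07000220"
  }
}
"""

data = json.loads(entry_text)
for council, cfg in data.items():
    # Every council entry carries a url and a wiki display name
    assert "url" in cfg and "wiki_name" in cfg, f"{council} is missing required keys"
    # LAD24CD is the ONS local authority code: one country letter + 8 digits
    assert cfg["LAD24CD"][0] in "EWSN" and cfg["LAD24CD"][1:].isdigit()
```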
20 changes: 2 additions & 18 deletions uk_bin_collection/uk_bin_collection/councils/BarkingDagenham.py
@@ -47,23 +47,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
# Close popup if it exists
driver.switch_to.active_element.send_keys(Keys.ESCAPE)

-# Handle cookie banner if present
-wait = WebDriverWait(driver, 60)
-try:
-    cookie_button = wait.until(
-        EC.element_to_be_clickable(
-            (
-                By.CSS_SELECTOR,
-                ".agree-button.eu-cookie-compliance-secondary-button.button.button--small",
-            )
-        ),
-        message="Cookie banner not found",
-    )
-    cookie_button.click()
-    print("Cookie banner clicked.")
-    time.sleep(1)  # Brief pause to let banner disappear
-except (TimeoutException, NoSuchElementException):
-    print("No cookie banner appeared or selector failed.")
+wait = WebDriverWait(driver, 10)

# Enter postcode
print("Looking for postcode input...")
@@ -84,7 +68,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
EC.element_to_be_clickable((By.ID, "address")),
message="Address dropdown not found",
)

dropdown = Select(address_select)

found = False
@@ -71,7 +71,7 @@ def parse_data(self, page: str, **kwargs: Any) -> Dict[str, Any]:
"sec-fetch-site": "same-origin",
"sec-fetch-user": "?1",
"upgrade-insecure-requests": "1",
-"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.5993.118 Safari/537.36",
+"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
}
form_data = {
"personInfo.person1.HouseNumberOrName": "",
@@ -36,7 +36,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-Site": "cross-site",
"Sec-Fetch-User": "?1",
-"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.5845.188 Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
}

response = requests.get(
@@ -1,10 +1,12 @@
import json
+import ssl

import requests
+import urllib3
from bs4 import BeautifulSoup

from uk_bin_collection.uk_bin_collection.common import *
from uk_bin_collection.uk_bin_collection.get_bin_data import AbstractGetBinDataClass
import ssl
import urllib3


class CustomHttpAdapter(requests.adapters.HTTPAdapter):
@@ -45,7 +47,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
-"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.5845.188 Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
"X-Requested-With": "XMLHttpRequest",
}

@@ -56,7 +58,7 @@ def parse_data(self, page: str, **kwargs) -> dict:

requests.packages.urllib3.disable_warnings()
response = session.get(
-f"https://www.bathnes.gov.uk/webapi/api/BinsAPI/v2/getbartecroute/{user_uprn}/true",
+f"https://api.bathnes.gov.uk/webapi/api/BinsAPI/v2/BartecFeaturesandSchedules/CollectionSummary/{user_uprn}",
headers=headers,
)
if response.text == "":
@@ -68,30 +70,14 @@

data = {"bins": []}

-if len(json_data["residualNextDate"]) > 0:
-    dict_data = {
-        "type": "Black Rubbish Bin",
-        "collectionDate": datetime.strptime(
-            json_data["residualNextDate"], "%Y-%m-%dT%H:%M:%S"
-        ).strftime(date_format),
-    }
-    data["bins"].append(dict_data)
-if len(json_data["recyclingNextDate"]) > 0:
-    dict_data = {
-        "type": "Recycling Containers",
-        "collectionDate": datetime.strptime(
-            json_data["recyclingNextDate"], "%Y-%m-%dT%H:%M:%S"
-        ).strftime(date_format),
-    }
-    data["bins"].append(dict_data)
-if len(json_data["organicNextDate"]) > 0:
-    dict_data = {
-        "type": "Garden Waste",
-        "collectionDate": datetime.strptime(
-            json_data["organicNextDate"], "%Y-%m-%dT%H:%M:%S"
-        ).strftime(date_format),
-    }
-    data["bins"].append(dict_data)
+for collection in json_data:
+    collection_date = datetime.fromisoformat(collection["nextCollectionDate"])
+    for feature in collection["features"]:
+        dict_data = {
+            "type": feature["featureDisplayName"],
+            "collectionDate": collection_date.strftime(date_format),
+        }
+        data["bins"].append(dict_data)

data["bins"].sort(
key=lambda x: datetime.strptime(x.get("collectionDate"), date_format)
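The rewritten Bath and North East Somerset parser above iterates the list returned by the new CollectionSummary endpoint instead of reading three hardcoded `*NextDate` fields, so new bin types need no code changes. A self-contained sketch of that loop, run against a hypothetical payload (field names mirror the diff; the dates and `date_format` value are assumptions for illustration):

```python
from datetime import datetime

date_format = "%d/%m/%Y"  # assumption: the project's common output date format

# Hypothetical payload shaped like the new CollectionSummary response
json_data = [
    {
        "nextCollectionDate": "2026-03-20T00:00:00",
        "features": [{"featureDisplayName": "Black Rubbish Bin"}],
    },
    {
        "nextCollectionDate": "2026-03-18T00:00:00",
        "features": [
            {"featureDisplayName": "Recycling Containers"},
            {"featureDisplayName": "Garden Waste"},
        ],
    },
]

data = {"bins": []}
for collection in json_data:
    # One collection date can cover several bin "features"
    collection_date = datetime.fromisoformat(collection["nextCollectionDate"])
    for feature in collection["features"]:
        data["bins"].append(
            {
                "type": feature["featureDisplayName"],
                "collectionDate": collection_date.strftime(date_format),
            }
        )

# Soonest collection first, as the real parser does
data["bins"].sort(key=lambda x: datetime.strptime(x["collectionDate"], date_format))
```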
@@ -29,7 +29,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Origin": "https://www.centralbedfordshire.gov.uk",
"Referer": "https://www.centralbedfordshire.gov.uk/info/163/bins_and_waste_collections_-_check_bin_collection_day",

-"User-Agent": "Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.7968.1811 Mobile Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36",
}

files = {
@@ -40,7 +40,7 @@ def get_data(self, url: str) -> str:
# Set a user agent so we look like a browser ;-)
user_agent = (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
-"Chrome/108.0.0.0 Safari/537.36"
+"Chrome/134.0.0.0 Safari/537.36"
)
headers = {"User-Agent": user_agent}
requests.packages.urllib3.disable_warnings()
@@ -48,7 +48,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
check_postcode(user_postcode)

# Create Selenium webdriver
-user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
+user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36"
driver = create_webdriver(web_driver, headless, user_agent, __name__)
driver.get("https://www.boston.gov.uk/findwastecollections")

@@ -27,7 +27,7 @@ def get_headers(base_url: str, method: str) -> dict[str, str]:
"Sec-Fetch-Dest": "document",
"Sec-Fetch-User": "?1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)"
-" Chrome/109.0.0.0 Safari/537.36",
+" Chrome/134.0.0.0 Safari/537.36",
}
if method.lower() == "post":
headers["Accept"] = "application/json, text/javascript, */*; q=0.01"
@@ -35,7 +35,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-Fetch-User": "?1",
"Sec-GPC": "1",
"Upgrade-Insecure-Requests": "1",
-"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
}
params = {
"ebp": "30",
@@ -31,7 +31,7 @@ def parse_data(self, page: str, **kwargs) -> dict:

headers = {
"Content-Type": "application/x-www-form-urlencoded",
-"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
"Referer": "https://www.braintree.gov.uk/xfp/form/554",
}

@@ -36,7 +36,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-GPC": "1",
"Upgrade-Insecure-Requests": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, "
-"like Gecko) Chrome/105.0.0.0 Safari/537.36",
+"like Gecko) Chrome/134.0.0.0 Safari/537.36",
}
service_type_params = {
"servicetypeid": "7dce896c-b3ba-ea11-a812-000d3a7f1cdc",
@@ -60,7 +60,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-Fetch-Site": "cross-site",
"Sec-GPC": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, "
-"like Gecko) Chrome/105.0.0.0 Safari/537.36",
+"like Gecko) Chrome/134.0.0.0 Safari/537.36",
}
llpg_uprn = "UPRN" + user_uprn
llpg_json_data = {
@@ -86,7 +86,7 @@ def parse_data(self, page: str, **kwargs) -> dict:
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "cross-site",
"Sec-GPC": "1",
-"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
+"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
}
json_data = {
"uprn": user_uprn,
Expand Down
@@ -30,7 +30,9 @@ def parse_data(self, page: str, **kwargs) -> dict:
data = {"bins": []}

# Get our initial session running
-driver = create_webdriver(web_driver, headless, None, __name__)
+# the HeadlessChrome useragent is blocked and immediately returns a 503
+user_agent = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
+driver = create_webdriver(web_driver, headless, user_agent, __name__)
driver.get(kwargs.get("url"))

wait = WebDriverWait(driver, 30)
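The Bromley change above works around server-side blocking of headless browsers: headless Chrome advertises `HeadlessChrome` in its default user agent, and the council site returns an immediate 503 when it sees it, so the scraper passes an explicit Chrome user-agent string instead. The kind of check the server likely performs can be illustrated in isolation (`is_blocked` is a hypothetical stand-in for the server's filter, not anything the site exposes):

```python
def is_blocked(user_agent: str) -> bool:
    # Naive server-side filter of the sort that 503s headless browsers:
    # a plain substring match on the default headless token
    return "HeadlessChrome" in user_agent


# Default UA a headless Chrome session would send (version number illustrative)
default_headless_ua = (
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) HeadlessChrome/138.0.0.0 Safari/537.36"
)

# Overriding the UA, as the fix does, makes the session look like a normal browser
override_ua = default_headless_ua.replace("HeadlessChrome", "Chrome")
```

The override string still has to look like a current Chrome release, which is why the same PR bumps the hardcoded user-agent strings across the other scrapers to v134.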